About Me

Hello! I'm Ankush Singh Bhardwaj, an enthusiastic Robotics Engineer with a deep-seated passion for the field. I am currently honing my skills through a Master's program in Robotics Engineering at Worcester Polytechnic Institute. My academic foundation is a Bachelor's degree in Mechanical (Mechatronics) Engineering from Jawaharlal Nehru Technological University, India, where I graduated with distinction and received the University Gold Medal in 2020.

My expertise encompasses a broad spectrum of robotics, including deep learning, computer vision, motion planning, and the dynamics and control of robotic systems, with a special focus on aerial robotics. I bring a rich skill set in technical tools such as ROS, ROS 2, Gazebo, behavior trees, Python, C++, PyTorch, TensorFlow, and MATLAB.

Education

Master's in Robotics Engineering

August 2022 - Present
Worcester Polytechnic Institute
Worcester, MA, USA
Coursework: Deep Learning, Hands-On Autonomous Aerial Vehicles, Robot Controls, Motion Planning, Legged Robotics, and Robot Dynamics.

Bachelor of Technology in Mechanical (Mechatronics) Engineering

August 2016 - September 2020
Jawaharlal Nehru Technological University
Hyderabad, TS, India
University Gold Medal - 2020 Best Outgoing Student; Recipient of the Sri Andhra Kesari Tanguturi Prakasam Pantulu Gold Medal, 2016-19
Coursework: Industrial Robotics, Automation in Manufacturing, Microprocessors and Microcontrollers, Motion Control Design, Switching Theory and Logic Design, and Automotive Engineering.

Experience

Research Assistant

May 2023 - Present
Manipulation and Environmental Robotics Lab
Worcester Polytechnic Institute
MER logo
I am currently working on two projects at our laboratory, both focused on advancing robotic manipulation and dexterous picking. We develop algorithms for generating effective grasps and for enhancing the precision and adaptability of robotic manipulators in complex tasks:
1. Benchmarking for Robotic Manipulation
2. Dexterous Picking
Please refer to the Research section of this website to read more about my work at the lab.

Assistant System Engineer

Feb 2021 - Feb 2022
Tata Consultancy Services Ltd.
Mumbai, India
Trained as a full-stack developer. Worked for a client on a mainframe system.
Technologies used: COBOL | JCL | SQL | DB2 | MFS | AWS | Java
My tasks and responsibilities included:
  • Developing and maintaining code per client requirements: adding new partners to existing transaction systems, updating databases efficiently through one-time programs, and creating or updating code to generate the desired credit files for partners.
  • Analyzing code to assess the feasibility of new features and implementing them without impacting existing functionality.
  • Performing unit and system testing before pushing code to production, finding and rectifying any critical errors or bugs.
  • Communicating directly with the client to resolve production issues and to understand their requirements and expectations, completing assigned tasks efficiently.
  • Mentoring new joiners on the software and conducting knowledge-transfer (KT) sessions on a regular basis.

Graduate Assistant

Student Advocacy and Programming
Aug 2022 - Present
Office of Diversity, Inclusion and Multicultural Education
Worcester Polytechnic Institute
  • Collaborated in designing and promoting inclusive programs and support services, fostering equity and acceptance for all students, while partnering with affinity organizations to develop and implement educational strategies.
  • Supported the ODIME team in crafting effective marketing strategies and producing original digital content for branding and promotion across social media platforms.

Research

MER Lab Logo

Research Assistant

Manipulation and Environmental Robotics Lab

Under the Supervision of Prof. Berk Calli

Dexterous Picking

Collaborators:

Harvard University Logo
Amazon Robotics Logo
Model for Dexterous Picking

This image illustrates the Model O underactuated three-finger gripper.
Source: https://www.eng.yale.edu/grablab/openhand/model_o.html

  • Developing robotic capabilities to mimic complex human dexterity in manipulating challenging objects.
  • Implementing techniques that allow robots to adaptively use skills like sliding or flipping thin objects for efficient manipulation.
• I focused on developing techniques to reconfigure the dexterous gripper during an ongoing grasp.
Reconfiguring Dexterous Gripper
  • Dexterous Gripper Reconfiguration: Specializing in the reconfiguration of the Model O underactuated three-finger gripper, developed by the OpenHand Project at Yale.
  • Kinematic Analysis Expertise: Conducted comprehensive kinematic analysis of the dexterous gripper to develop an effective velocity Jacobian.
  • Precision Control Implementation: Implemented a PD controller driven by the gripper's encoder readings, enabling precise manipulation.
  • Desired Pose Achievement: Employed the velocity Jacobian to attain desired gripper poses during complex grasping operations.
  • Gripper Deformation Management: Developed a novel approach to gripper reconfiguration, particularly useful when the gripper deforms upon colliding with objects.
  • Innovative Grasping Solutions: Devised methods to enhance the functionality and adaptability of robotic grippers in challenging manipulation tasks.
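As a rough illustration of the resolved-rate idea behind this reconfiguration, the sketch below uses a hypothetical two-link planar finger (link lengths and gains are assumptions, not the actual Model O kinematics): a PD term on the task-space error is mapped through the Jacobian pseudoinverse to joint velocities.

```python
import numpy as np

# Hypothetical 2-link planar finger (lengths are illustrative,
# not the real Model O parameters).
L1, L2 = 0.05, 0.04  # meters

def fk(q):
    """Fingertip position for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Velocity Jacobian dp/dq of the fingertip."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def reconfigure(q, p_des, kp=4.0, kd=0.5, dt=0.01, steps=500):
    """Resolved-rate loop: PD on the task-space error, mapped through
    the Jacobian pseudoinverse to joint velocities."""
    e_prev = p_des - fk(q)
    for _ in range(steps):
        e = p_des - fk(q)
        de = (e - e_prev) / dt
        v = kp * e + kd * de                   # desired fingertip velocity
        dq = np.linalg.pinv(jacobian(q)) @ v   # joint velocities
        q = q + dq * dt
        e_prev = e
    return q

q0 = np.array([0.3, 0.6])
p_goal = np.array([0.06, 0.04])   # reachable fingertip target
qf = reconfigure(q0, p_goal)
```

The same structure generalizes to the full gripper once its Jacobian is known; the pseudoinverse also degrades gracefully near singular configurations.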

Benchmarking Robotic Grasping Algorithms

Collaborators:

UMass Lowell Logo

This video demonstrates the benchmarking process using the Franka Emika Panda robotic arm, showcasing the evaluation of a mask-based grasping algorithm.

  • Addressing the crucial need for standard benchmarks in robotic manipulation to foster advancement of the field.
  • Devised systematic benchmarks for the evaluation of robotic grasping algorithms.
  • Conducting a benchmark study using the Franka Emika Panda robotic arm and Intel RealSense camera.
  • Experimenting with diverse grasping algorithms, including the application of the Point Cloud Library (PCL), generative neural networks, and deep learning methods on the YCB object and model set.
grasps generated by various algorithms

This collage showcases a comparative study of grasping algorithms: the first column features grasps generated by a mask-based algorithm, the second by ResNet, and the third by GGCNN (Generative Grasping Convolutional Neural Network).

Rope point cloud
Duct point cloud

The red and pink points represent the grasp points generated from the point clouds.

Behavior Trees
  • Utilizing FlexBE in ROS for the design and implementation of behavior trees, enhancing the benchmarking pipeline's consistency and quality in robotic system evaluations.
  • Developing precise and autonomous individual states within the behavior trees to improve the functionality of robotic arms for complex task execution.
  • Expanding the evaluation pipeline for public accessibility, contributing to the democratization of advanced robotic benchmarking resources.
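The structure of such a pipeline can be sketched with a minimal behavior-tree skeleton. This is an illustrative toy, not the FlexBE API; the state names (detect, plan, execute) are hypothetical stand-ins for the pipeline's actual states.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: wraps a callable returning SUCCESS or FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

class Sequence:
    """Composite node: ticks children in order, failing fast on the
    first FAILURE, succeeding only if every child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical benchmarking states (names are illustrative only).
def detect_object(bb):
    bb["pose"] = (0.4, 0.1, 0.02)   # pretend perception result
    return SUCCESS

def plan_grasp(bb):
    return SUCCESS if "pose" in bb else FAILURE

def execute_grasp(bb):
    bb["grasped"] = True
    return SUCCESS

pipeline = Sequence([Action(detect_object), Action(plan_grasp), Action(execute_grasp)])
bb = {}
result = pipeline.tick(bb)
```

Keeping each state small and independent is what makes the benchmarking runs repeatable: the same tree can be re-ticked for every object and algorithm under test.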
Behavioral Tree

The image shows the behavior tree developed for the benchmarking pipeline.

Projects

Please click on the individual project titles or "Read More" to see the outputs and a detailed explanation of each project.

Sim2Real Window Navigation in Autonomous Drone Racing

This project involved navigating a DJI Tello nano drone through a complex track with simulated and real windows, utilizing deep learning for window detection, and real-world coordinate mapping with PnP algorithms. It featured a blend of classical computer vision techniques and innovative planning and control strategies, tested in both simulated and real environments to demonstrate the drone's autonomous navigation capabilities.

Read More

Unknown Gap Navigation for Mobile Robots: Real-Time Detection and Precision Maneuvering

The "Unknown Gap Navigation for Mobile Robots" project focused on navigating mobile robots through irregular gaps using real-time optical flow and strategic visual servoing. It involved overcoming the hardware limitations of small mobile robots like the DJI Tello drone by pairing it with an NVIDIA Jetson Orin Nano for advanced processing. The project's effectiveness was validated through testing in the Blender environment and live demonstrations, proving its applicability in real-world scenarios such as search and rescue missions.

Read More

Sensor Fusion in IMU Pose Estimation: Integrating Complementary, Madgwick, and Unscented Kalman Filters

The project involved integrating Complementary, Madgwick, and Unscented Kalman Filters to accurately determine the 3D orientation of an IMU. It focused on calibrating and aligning sensor data against a Vicon motion capture system, ensuring precise estimations. The project highlighted the superior performance of the Unscented Kalman Filter in pose accuracy, demonstrating the effectiveness of advanced sensor fusion techniques in complex orientation tracking.

Read More

Robust Sliding Mode Control for Precision Trajectory Tracking

The project harnessed Sliding Mode Control (SMC) to master complex trajectory tracking with the Crazyflie 2.0 micro air vehicle (simulation). It entailed designing robust control laws and integrating them with ROS for precise navigation and stability, even in the presence of external disturbances. Successful trials underscored the effectiveness of the control approach, marked by the UAV's smooth flight path adherence and minimal trajectory deviation.

Read More

Real Time Instance Segmentation using Modified YOLACT

The project enhanced YOLACT's real-time instance segmentation by integrating a modified ResNet backbone with Channel Attention Models (CAM) and a novel contextual loss function, significantly improving segmentation performance. The refined model demonstrated superior precision, evidenced by a 12% increase in mean average precision (mAP) and enhanced MaskIOU. Rigorous performance evaluations confirmed the model's efficacy in real-time applications, achieving 18.1 fps in video segmentation and showcasing marked improvements in complex scenario handling.

Read More

Dynamic Motion Planning for Autonomous Navigation: Implementing Advanced Algorithms for Real-Time Navigation

The project focused on optimizing advanced motion planning algorithms, including RRTx (a variant of Rapidly-exploring Random Trees), OP-RRT (Obstacle Potential RRT), and OP-PRM (Obstacle Potential Probabilistic Roadmap), for dynamic navigation in mobile robotics. Utilizing the ROS Husky simulation platform, it addressed real-time path planning challenges in unpredictable environments, demonstrating improved navigation efficiency and safety. Extensive testing in the Gazebo simulation environment validated the effectiveness of these algorithms in scenarios like obstacle avoidance and narrow corridor traversal.

Read More

Optimization-Based Kinematic Calibration of Hexapod Stewart Platform: Enhancing Precision through Boundary-Conscious Configurations

The project "Optimization-Based Kinematic Calibration of Hexapod Stewart Platform" involved precise calibration of the hexapod manipulator using advanced optimization techniques and strategic configuration selection near the workspace boundary. It featured robust kinematic modeling, integration of velocity Jacobian, and meticulous validation, ensuring high precision in robotic maneuvers. The successful application of least squares optimization and workspace boundary analysis culminated in significantly enhanced calibration accuracy, demonstrated through detailed visualizations.

Read More

Sim2Real Window Navigation in Autonomous Drone Racing

  • Drone Racing Challenge: Utilized DJI Tello nano drone for navigating a complex track featuring windows, an unknown irregular gap, and a dynamic window.
    A photo of the window on the track in simulation.

    The real drone racing track.

  • Window Detection with Deep Learning: Employed the drone's monocular camera with a custom U-Net architecture, running on an NVIDIA Jetson Orin Nano, for precise window segmentation.
  • Sim2Real Transition: Leveraged Blender for simulating realistic environments, enhancing the drone's adaptability from virtual to real-world settings.
Windows on Blender

The image shows a sample dataset of windows created on Blender.

  • Classical Computer Vision Techniques: Post-U-Net segmentation, applied classical computer vision methods for accurate localization of window corners.
  • Perspective-n-Point (PnP) Application: Implemented PnP algorithms for real-world coordinate determination, ensuring safe navigation through windows.
  • Pre-Known Window Locations: Integrated approximate window location knowledge for efficient and accurate real-time image processing.
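The PnP step described above can be illustrated with a deliberately simplified sketch: assuming known camera intrinsics and a fronto-parallel window (so the rotation is taken as identity and only the translation is unknown), the 3D-2D correspondences yield a linear system. All numeric values (intrinsics, window size, pose) are illustrative, not the Tello's calibration.

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative values, not the Tello's).
K = np.array([[920., 0., 480.],
              [0., 920., 360.],
              [0., 0., 1.]])

# Window corner positions (meters) before translation; fronto-parallel
# window, so rotation is taken as identity in this sketch.
corners = np.array([[-0.5, -0.35, 0.],
                    [ 0.5, -0.35, 0.],
                    [ 0.5,  0.35, 0.],
                    [-0.5,  0.35, 0.]])
t_true = np.array([0.1, -0.05, 2.0])   # "unknown" camera-to-window translation

def project(X, t):
    """Pinhole projection of 3D point X under translation t."""
    p = K @ (X + t)
    return p[:2] / p[2]

pixels = np.array([project(X, t_true) for X in corners])

def solve_translation(corners, pixels):
    """Translation-only linear PnP: each correspondence gives
    s_i * d_i - t = X_i with ray d_i = K^-1 [u, v, 1]^T.
    Stack into A @ [s_1..s_n, t] = X and solve by least squares."""
    n = len(corners)
    A = np.zeros((3 * n, n + 3))
    b = np.zeros(3 * n)
    for i, (X, uv) in enumerate(zip(corners, pixels)):
        d = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
        A[3 * i:3 * i + 3, i] = d          # unknown depth s_i
        A[3 * i:3 * i + 3, n:] = -np.eye(3)  # unknown translation t
        b[3 * i:3 * i + 3] = X
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[n:]

t_est = solve_translation(corners, pixels)
```

The full problem (unknown rotation as well) is what general PnP solvers handle; this sketch only shows why four corner correspondences are enough to pin down the drone-to-window geometry.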

Planning, Control, and Integration Stack

  • 3D Pose Estimation: Employed camera calibration and PnP techniques to ascertain the drone's 3D pose relative to windows.
Camera calibration was performed using MATLAB to obtain the camera intrinsics and extrinsics.

  • Navigation Strategy: Devised an RRT* planner and generated quintic trajectories to plot a path through the closest window, leveraging real-time position and orientation data.
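A quintic trajectory of the kind used here is obtained by solving a 6x6 linear system for polynomial coefficients that meet position, velocity, and acceleration boundary conditions. This is a generic one-dimensional sketch, not the project's exact planner:

```python
import numpy as np

def quintic_coeffs(p0, pf, T, v0=0., vf=0., a0=0., af=0.):
    """Coefficients of p(t) = c0 + c1 t + ... + c5 t^5 satisfying
    position/velocity/acceleration boundary conditions at t=0 and t=T."""
    M = np.array([
        [1, 0,    0,      0,       0,        0],   # p(0)
        [0, 1,    0,      0,       0,        0],   # p'(0)
        [0, 0,    2,      0,       0,        0],   # p''(0)
        [1, T, T**2,   T**3,    T**4,     T**5],   # p(T)
        [0, 1,  2*T, 3*T**2,  4*T**3,   5*T**4],   # p'(T)
        [0, 0,    2,    6*T, 12*T**2, 20*T**3]])   # p''(T)
    return np.linalg.solve(M, np.array([p0, v0, a0, pf, vf, af]))

# One waypoint segment: move from 0 m to 1 m in 2 s, at rest at both ends.
c = quintic_coeffs(0.0, 1.0, 2.0)

def p(t):
    return sum(ci * t**i for i, ci in enumerate(c))

def v(t):
    return sum(i * ci * t**(i - 1) for i, ci in enumerate(c) if i > 0)
```

Applying this per translational coordinate, with zero velocity and acceleration at each waypoint, gives the smooth window-to-window segments described above.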

Hardware Integration and Computational Approach

  • Drone Hardware Limitations: Utilized a nano drone (DJI Tello), which lacks onboard computing capabilities.
  • Computing with Jetson Orin Nano: Employed NVIDIA Jetson Orin Nano for processing and running neural networks, communicating with the drone via WiFi.
  • Neural Network Design: Developed a lightweight neural network optimized for real-time inferences, crucial for effective autonomous navigation.

Live Demonstration and Testing

The image shows real-time inferences from the U-Net architecture while the drone was in motion.

  • Successfully showcased the autonomous navigation of DJI Tello through a windowed track in a live demonstration.
  • Displayed real-time detections and 3D pose estimations, validating the effectiveness of the developed sim2real approach.
  • Executed the trajectory through the windows after obtaining the 3D poses of the windows and estimating a safe waypoint.

This video demonstrates the DJI Tello drone taking live inferences and passing through the window.

Unknown Gap Navigation for Mobile Robots: Real-Time Detection and Precision Maneuvering

The image shows the irregular, unknown gap to be crossed.

  • Challenging Drone Navigation: Addressed the complexities of flying through unpredictable windows in real-world environments, simulating scenarios common in search and rescue, and reconnaissance missions.
  • Hardware Adaptation: Overcame the computing limitations of the DJI Tello nano drone by integrating NVIDIA Jetson Orin Nano for advanced computational tasks and neural network processing.
  • Innovative Perception Approach - Optical Flow: Implemented SPyNET for real-time optical flow computation combined with Canny edge detection, enabling the drone to effectively locate and assess the largest gap within its field of view.
  • Testing: Conducted testing on four pre-defined window shapes in the Blender environment, which allowed experimentation with different window textures and shapes using provided assets. The effectiveness of the drone's detection capability was measured using the Intersection over Union (IoU) metric.
A photo of an arbitrary shape with different textures, created in Blender for testing.

The optical flow visualized after running it through SPyNET.

Comparison of the obtained contour with the ground truth using the IoU metric.
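The IoU score used for this comparison is computed directly on binary masks, as in this minimal sketch (the masks below are synthetic examples):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

# Synthetic example: ground-truth gap region vs. a detected contour
# that only covers part of it.
gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True        # 36 ground-truth pixels
pred = np.zeros((10, 10), dtype=bool)
pred[4:8, 2:8] = True      # 24 predicted pixels, all inside gt

score = iou(pred, gt)      # 24 / 36
```

A score of 1.0 means the detected contour exactly matches the ground-truth gap; thresholding this score is a simple way to decide whether a detection counts as successful.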

  • Strategic Visual Servoing: Executed visual servoing techniques to enhance UAV maneuverability and precision in navigating through detected gaps.
  • Autonomous Flight and Control: Developed and integrated comprehensive planning and control algorithms, facilitating autonomous drone flight through various window challenges with high accuracy and safety.
  • Practical Demonstration Success: Demonstrated the effectiveness of the gap detection and visual servoing methodologies in a live setting, proving the concept’s practicality for real-world applications.
Photo of the irregular gap, taken by the DJI Tello in flight, that had to be detected during the demonstration.

Visualization of the optical flow calculated from real-time inference. The red dot indicates the center of the largest contour.

This video demonstrates the DJI Tello drone detecting an unknown gap and maneuvering through it.

Sensor Fusion in IMU Pose Estimation: Integrating Complementary, Madgwick, and Unscented Kalman Filters

  • Comprehensive IMU Pose Estimation: Implemented three advanced filters - Complementary, Madgwick, and Unscented Kalman Filter - for 3D orientation estimation of an IMU, leveraging accelerometer and gyroscope data.
  • Data Calibration: Utilized ArduIMU+ V2, a 6-DoF sensor, and calibrated IMU data against Vicon motion capture system for ground truth comparison, ensuring accurate sensor readings.
  • Filter Comparison and Analysis: Analyzed and compared the effectiveness of each filter with Vicon data, with UKF demonstrating the highest accuracy in estimating the IMU's pose.
  • Sensor Fusion Techniques: Explored sensor fusion methodologies through these filters to refine the estimation of the IMU's 3D orientation, emphasizing on the fusion of accelerometer and gyroscope data for enhanced accuracy.
  • Visualization and Verification: Used rotplot.py to visualize the filters' orientation outputs, aligning and verifying them against the Vicon system's data for robust pose-estimation validation.
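As an illustration of the simplest of the three filters, a one-axis complementary filter blends the integrated gyro rate (trusted at high frequency) with the accelerometer-derived angle (trusted at low frequency). The signals below are synthetic, and the blend factor is an assumption, not the project's tuned value:

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """1-DoF complementary filter: high-pass the integrated gyro,
    low-pass the accelerometer angle, and blend with factor alpha."""
    theta = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        theta = alpha * (theta + w * dt) + (1 - alpha) * a
        out.append(theta)
    return np.array(out)

# Synthetic scenario: constant true tilt of 0.2 rad. The gyro reports
# zero rate plus a small bias; the accelerometer reports the tilt with noise.
rng = np.random.default_rng(0)
n, dt = 2000, 0.01
gyro = np.full(n, 0.005)                  # gyro bias (rad/s)
accel = 0.2 + rng.normal(0, 0.05, n)      # noisy tilt measurement (rad)

est = complementary_filter(gyro, accel, dt)
```

The filter suppresses both the accelerometer noise and the gyro drift; the Madgwick and UKF approaches generalize this fusion to full 3D orientation with principled weighting.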
The images show a comparison of the Complementary, Madgwick, and Unscented Kalman Filters against the Vicon ground-truth data.

This video demonstrates the rotplot visualization of the Unscented Kalman Filter.

Robust Sliding Mode Control for Precision Trajectory Tracking

  • Project Overview: Developed a robust control scheme for the Crazyflie 2.0 quadrotor, focusing on precise trajectory tracking amidst external disturbances using MATLAB and ROS.
  • Crazyflie 2.0 MAV: Utilized the lightweight (27 grams) Crazyflie 2.0 (simulation) micro air vehicle, equipped with coreless DC motors, for dynamic and responsive flight control experiments.
  • Dynamic Model and Sliding Mode Control: Modeled quadrotor dynamics and implemented a sliding mode control approach for effective altitude and attitude adjustments to adhere to the desired trajectories.
  • Trajectory Generation: Generated quintic trajectories for the Crazyflie's translational coordinates, ensuring smooth waypoint navigation with zero velocity and acceleration at each point.
  • Control Law Implementation: Designed boundary layer-based sliding mode control laws for precise altitude and attitude control, enabling efficient waypoint tracking.
  • ROS Integration for Real-Time Evaluation: Developed a ROS node (Python/MATLAB) to simulate and evaluate the control design's performance in the Gazebo environment, ensuring real-world applicability and robustness.
  • Successful Results: Achieved smooth and stable quadrotor movement with minimal overshoot or oscillations, confirming the effectiveness of the control script in trajectory tracking.
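The boundary-layer sliding mode idea can be illustrated on a simplified 1-DoF altitude model (mass, gains, and the disturbance below are assumptions for the sketch, not the project's tuned values): the control cancels gravity, shapes the sliding surface s = ė + λe, and replaces the discontinuous sign term with a saturation to avoid chattering.

```python
import numpy as np

def sat(x):
    """Boundary-layer replacement for sign(x)."""
    return np.clip(x, -1.0, 1.0)

# Hypothetical 1-DoF altitude model: m * z_dd = u - m*g + d(t).
m, g = 0.027, 9.81            # Crazyflie-like mass (kg)
lam, k, phi = 4.0, 3.0, 0.1   # surface slope, switching gain, boundary layer
dt, T = 0.002, 6.0

z, zd = 0.0, 0.0              # altitude and vertical velocity
z_des = 1.0                   # constant reference altitude

for i in range(int(T / dt)):
    e, ed = z_des - z, -zd            # tracking error and its derivative
    s = ed + lam * e                  # sliding surface
    # boundary-layer sliding mode law: gravity feedforward + surface terms
    u = m * (g + lam * ed + k * sat(s / phi))
    d = 0.05 * np.sin(2 * np.pi * 0.5 * i * dt)  # bounded disturbance (N)
    zdd = (u - m * g + d) / m
    zd += zdd * dt                    # Euler integration
    z += zd * dt
```

As long as the switching gain k dominates the disturbance bound |d|/m, the state is driven into the boundary layer around s = 0, where the error decays with time constant 1/λ, which is the robustness property the bullets above rely on.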
The video and the graph show that the UAV converges to the reference trajectory within a fraction of a second, and the error between the desired and actual trajectories is minimal.

Real Time Instance Segmentation using Modified YOLACT

  • Advanced Instance Segmentation: Developed an enhanced real-time segmentation model based on YOLACT by implementing Mask R-CNN with custom configurations and backbone architecture, integrating Feature Pyramid Networks (FPNs).
  • YOLACT Architecture Enhancement: Improved the YOLACT framework's segmentation performance and prediction confidence by modifying the ResNet backbone with Channel Attention Models (CAM), specifically tailored for complex segmentation scenarios.
  • Contextual Loss Function Integration: Incorporated a novel contextual loss function, leading to a 12 percent increase in mean average precision (mAP) and improved MaskIoU, ensuring refined segmentation accuracy.
  • Channel Attention Mechanism: Adopted the channel attention mechanism inspired by "Squeeze-and-Excitation Networks" to adaptively recalibrate feature maps channel-wise, thereby enhancing feature extraction and model responsiveness.
  • Comprehensive Segmentation Analysis: Utilized advanced analytical techniques, including Global Average and Max Pooling followed by Fully Connected layers with ReLU activation, to optimize channel attention and further refine feature map outputs.
  • Performance Evaluation: Conducted rigorous testing and performance evaluation, demonstrating that the modified YOLACT model achieved higher mAP compared to the traditional YOLACT, despite a minor trade-off in frames per second (fps).
  • Visual Proof of Concept: Showcased the model's enhanced prediction capability and segmentation accuracy through visual before-and-after comparisons, highlighting the effectiveness of the channel attention model and the contextual loss function.

Before


After

  • Real-Time Video Segmentation Achievement: Successfully attained real-time instance segmentation on videos, achieving an operational speed of 18.1 fps, thereby demonstrating the model's applicability and efficiency in dynamic, real-world environments.
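The channel attention recalibration described above (squeeze by global average pooling, excite through two FC layers, then rescale each channel) can be sketched in a few lines. The weights below are random placeholders purely for illustration; in the real model they are learned:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-Excitation-style recalibration of a (C, H, W) map:
    global average pool -> FC + ReLU -> FC + sigmoid -> channel-wise scale."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)              # excitation, reduced to C/r dims
    gates = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # per-channel gates in (0, 1)
    return x * gates[:, None, None]          # recalibrated feature map

rng = np.random.default_rng(1)
C, r = 8, 2                                  # channels and reduction ratio
x = rng.normal(size=(C, 16, 16))             # toy feature map
w1 = rng.normal(size=(C // r, C)) * 0.1      # placeholder learned weights
w2 = rng.normal(size=(C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
```

Because the gates lie in (0, 1), the block can only attenuate or preserve each channel, which is what lets the network emphasize informative channels without changing the map's shape.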
We acknowledge that this video has been sourced from the public domain and was used to test our instance segmentation network. We do not claim any ownership of or credit for its content.
Dynamic Motion Planning for Autonomous Navigation: Implementing Advanced Algorithms for Real-Time Navigation

  • Project Overview: Focused on optimizing motion planning algorithms such as RRTx, OP-RRT, and OP-PRM to enable robust navigation of mobile robots in dynamic environments using the ROS Husky simulation platform.
  • Complex Environment Navigation: Addressed the challenges of navigating dynamic environments with unpredictable obstacles and changing conditions, crucial for applications in autonomous vehicles, industrial automation, service robots, drones, and logistics.
  • Algorithm Implementation and Comparison: Implemented and compared the efficiency of sampling-based algorithms such as OP-PRM, Risk-DTRRT, and RRTx in navigating narrow passages and dynamic scenarios, using the ROS Husky as a testbed.
  • Real-Time Path Planning: Enhanced real-time trajectory generation and path planning by integrating offline and reactive path-planning algorithms, with a focus on improving safety and efficiency in autonomous navigation.
  • Algorithmic Innovations: Developed solutions such as obstacle-potential-based PRM (OP-PRM) for effective sampling in narrow corridors and integrated APF-based sampling for improved navigation in cluttered environments.
  • Extensive Testing and Evaluation: Conducted rigorous testing in simulated environments using the ROS Husky, focusing on obstacle avoidance, narrow corridor traversal, dynamic goal tracking, and goal re-planning to validate the proposed solutions.
  • Simulation and Real-Time Performance: Utilized the Gazebo simulation environment to test and evaluate the motion planning algorithms, ensuring the practical applicability of the control schemes for real-world dynamic navigation challenges.

The video demonstrates OP-RRT in a dynamic 2D environment.

The video demonstrates the implementation of RRTx on a Husky robot in a dynamic environment.
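One simple way to use a repulsive obstacle potential when drawing roadmap samples is rejection sampling against a potential threshold. This is a deliberately simplified sketch of the general idea (OP-PRM uses the potential more cleverly, to bias samples into narrow corridors); the obstacle layout and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two circular obstacles in a 6 x 6 workspace (illustrative layout).
obstacles = np.array([[2.0, 2.0], [4.0, 3.0]])
radius, d0 = 0.5, 1.0   # obstacle radius and potential influence range

def potential(p):
    """Repulsive obstacle potential: rises sharply as the clearance
    to an obstacle shrinks below the influence range d0."""
    u = 0.0
    for ob in obstacles:
        d = max(np.linalg.norm(p - ob) - radius, 1e-6)  # clearance
        if d < d0:
            u += 0.5 * (1.0 / d - 1.0 / d0) ** 2
    return u

def biased_samples(n, threshold=5.0):
    """Keep only samples whose obstacle potential is below a threshold,
    discarding configurations inside or too close to obstacles."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(0.0, 6.0, size=2)
        if potential(p) < threshold:
            pts.append(p)
    return np.array(pts)

samples = biased_samples(200)
```

The accepted samples are guaranteed a minimum clearance determined by the threshold; tuning the threshold (or sampling proportionally to a corridor-seeking potential instead) trades coverage of tight passages against safety margin.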

Optimization-Based Kinematic Calibration of Hexapod Stewart Platform: Enhancing Precision through Boundary-Conscious Configurations

  • Optimization-Based Calibration: Executed precise calibration of the hexapod Stewart platform, employing advanced optimization methods to refine kinematic accuracy.
  • Strategic Configuration Selection: Chose measurement configurations close to the workspace boundary to enhance error observability, ensuring superior calibration results.

The photo shows random configuration points chosen in the workspace at different heights.

  • Least Squares Optimization: Applied least squares optimization to minimize the cost function effectively, achieving highly accurate parameter identification.
  • Complex Kinematic Modeling: Implemented inverse and forward kinematics models, leveraging nominal and simulated real values to calculate the pose and leg lengths.
  • Velocity Jacobian Integration: Incorporated Jacobian matrices in both the nominal and real forward kinematics to ensure dynamic response accuracy.
  • Workspace Boundary Analysis: Employed 'config.m' to compute points near the workspace boundary, ensuring comprehensive calibration across the entire operational range.
  • Calibration and Validation: Conducted thorough calibration using custom-developed MATLAB functions, followed by validation against predetermined configurations to ensure adherence to precise operational standards.
  • Result Visualization: Presented calibration results through descriptive visualizations, highlighting the accuracy and effectiveness of the calibration process.

Comparative results of errors before and after calibration.
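The least-squares identification step can be illustrated on a toy Stewart-platform model in which only per-leg length offsets are calibrated (the geometry, offset values, and noise level below are assumptions, not the real platform's parameters; the project also favored configurations near the workspace boundary, whereas this sketch samples randomly):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical geometry: six base and platform joints on circles.
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[2.0 * np.cos(ang), 2.0 * np.sin(ang), np.zeros(6)]
plat = np.c_[1.0 * np.cos(ang + 0.3), 1.0 * np.sin(ang + 0.3), np.zeros(6)]

def leg_lengths(pose_t, offsets=np.zeros(6)):
    """Inverse kinematics: leg lengths for a platform translation pose_t
    (orientation held fixed for simplicity), plus per-leg offset errors."""
    return np.linalg.norm(plat + pose_t - base, axis=1) + offsets

true_offsets = np.array([0.01, -0.02, 0.005, 0.015, -0.01, 0.02])

# Measurement configurations spread through the workspace.
poses = [rng.uniform([-0.5, -0.5, 1.5], [0.5, 0.5, 2.5]) for _ in range(20)]

# Measured minus nominal leg lengths is linear in the offsets,
# so least squares recovers them directly.
meas = np.concatenate([leg_lengths(p, true_offsets) + rng.normal(0, 1e-3, 6)
                       for p in poses])
nom = np.concatenate([leg_lengths(p) for p in poses])
A = np.vstack([np.eye(6) for _ in poses])        # Jacobian of residual w.r.t. offsets
est, *_ = np.linalg.lstsq(A, meas - nom, rcond=None)
```

In the full calibration, the residual also depends nonlinearly on joint-location errors, so the same least-squares machinery is applied iteratively with the kinematic Jacobian in place of the identity blocks used here.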
