Research
Projects before 2024 are either unpublished work or thesis projects.
2025
- eFlesh: Highly Customizable Magnetic Touch Sensing using Cut-Cell Microstructures. Venkatesh Pattabiraman, Zizhou Huang, Daniele Panozzo, and 3 more authors. Under review, 2025.
If human experience is any guide, operating effectively in unstructured environments—such as homes and offices—requires robots to sense the forces they apply during physical interaction. Yet, the lack of a versatile, accessible, and easily customizable tactile sensor has led to fragmented, sensor-specific solutions in general-purpose robotic manipulation—and in many cases, to force-unaware, sensorless approaches. With eFlesh, we aim to bridge this gap by introducing a magnetic tactile sensor that is low-cost, easy to fabricate, and highly customizable. Building an eFlesh sensor requires only four components: a hobbyist 3D printer, off-the-shelf magnets (costing less than $5), a simple CAD model of the desired shape, and a magnetometer circuit board. The sensor is constructed from tiled, parameterized cut-cell microstructures, which allow for tuning both the sensor’s geometry and its mechanical response. To support broad accessibility, we provide an open-source design tool that converts simple convex OBJ/STL files into 3D-printable STLs ready for fabrication. This modular design framework enables users to create application-specific sensors for robot hands, grippers, quadruped feet, and more, and to easily adjust sensitivity to meet the demands of different tasks. Our sensor characterization experiments demonstrate the precision of eFlesh: contact localization accuracy of 0.5 mm, with force prediction errors of 0.27 N along the z-axis and 0.12 N in the x/y-plane. We also present a learning-based slip detection model that generalizes to unseen objects with 95% accuracy, and visuotactile control policies that improve manipulation performance by 40% over vision-only baselines, achieving a 90% success rate on precise tasks such as plug insertion and credit card swiping that require sub-millimeter accuracy for successful completion.
All design files, code, trained models, and the CAD-to-eFlesh STL conversion tool are openly available to promote accessibility and encourage widespread adoption.
- Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skin. Venkatesh Pattabiraman*, Yifeng Cao, Siddhant Haldar, and 2 more authors. International Journal of Robotics Research (IJRR), manuscript in preparation, 2025.
While visuomotor policy learning has advanced robotic manipulation, precisely executing contact-rich tasks remains challenging due to the limitations of vision in reasoning about physical interactions. To address this, recent work has sought to integrate tactile sensing into policy learning. However, many existing approaches rely on optical tactile sensors that are either restricted to recognition tasks or require complex dimensionality reduction steps for policy learning. In this work, we explore learning policies with magnetic skin sensors, which are inherently low-dimensional, highly sensitive, and inexpensive to integrate with robotic platforms. To leverage these sensors effectively, we present the ViSk framework, a simple approach that uses a transformer-based policy and treats skin sensor data as additional tokens alongside visual information. Evaluated on four complex real-world tasks involving credit card swiping, plug insertion, USB insertion, and bookshelf retrieval, ViSk significantly outperforms both vision-only and optical tactile sensing based policies. Further analysis reveals that combining tactile and visual modalities enhances policy performance and spatial generalization, achieving an average improvement of 27.5% across tasks. Videos and more details can be found on https://visuoskin.github.io.
- Feel The Force: Contact-Driven Learning from Humans. Ademi Adeniji*, Zhuoran Chen*, Vincent Liu, and 5 more authors. Under review, 2025.
Robots often struggle with fine-grained force control in contact-rich manipulation tasks. While learning from human demonstrations offers a scalable solution, visual observations alone lack the fidelity needed to capture tactile intent. To bridge this gap, we propose Feel the Force (FTF): a framework that learns force-sensitive manipulation from human tactile demonstrations. FTF uses a low-cost tactile glove to measure contact forces and vision-based hand pose estimation to capture human demonstrations. These are used to train a closed-loop transformer policy that predicts robot end-effector trajectories and desired contact forces. At deployment, a PD controller modulates gripper closure to match the predicted forces, enabling precise and adaptive manipulation. FTF generalizes across diverse force-sensitive tasks, achieving a 77% success rate across five manipulation scenarios, and demonstrates robustness to test-time disturbances—highlighting the benefits of grounding robotic control in human tactile behavior.
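The deployment-time loop described in the abstract, where a PD controller modulates gripper closure so that measured contact force tracks the policy's predicted force, can be sketched as follows. This is a minimal illustration only: the gains, the normalized closure range, the function names, and the linear-stiffness contact model in the toy simulation are all assumptions, not the authors' implementation.

```python
def pd_gripper_step(predicted_force, measured_force, prev_error,
                    closure, kp=0.02, kd=0.05):
    """One PD control step: return an updated closure command and the
    current force error, so the caller can pass it back in next step."""
    error = predicted_force - measured_force
    d_error = error - prev_error
    # Positive error (too little contact force) -> close the gripper more.
    closure = closure + kp * error + kd * d_error
    # Clamp to an assumed normalized closure range [0, 1].
    closure = max(0.0, min(1.0, closure))
    return closure, error

def simulate(target_force=2.0, steps=200, stiffness=5.0):
    """Toy rollout: measured force grows linearly with closure
    (an assumed contact model), and PD control tracks the target."""
    closure, error = 0.0, 0.0
    for _ in range(steps):
        measured = stiffness * closure
        closure, error = pd_gripper_step(target_force, measured,
                                         error, closure)
    return stiffness * closure
```

In a real system, `predicted_force` would come from the transformer policy at each step and `measured_force` from the tactile glove or on-robot sensing; the toy `simulate` only shows that the loop settles at the commanded force.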
2024
- AnySkin: Plug-and-play Skin Sensing for Robotic Touch. Raunaq Bhirangi, Venkatesh Pattabiraman, Enes Erciyes, and 3 more authors. International Conference on Robotics and Automation (ICRA), published, 2024.
While tactile sensing is widely accepted as an important and useful sensing modality, its use pales in comparison to other sensory modalities like vision and proprioception. AnySkin addresses the critical challenges that impede the use of tactile sensing: versatility, replaceability, and data reusability. Building on the simple design of ReSkin, and decoupling the sensing electronics from the sensing interface, AnySkin makes integration as straightforward as putting on a phone case and connecting a charger. Furthermore, AnySkin is the first uncalibrated tactile sensor to report cross-instance generalizability of learned manipulation policies. To summarize, this work makes three key contributions: first, we introduce a streamlined fabrication process and a design tool for creating an adhesive-free, durable and easily replaceable magnetic tactile sensor; second, we characterize slip detection and policy learning with the AnySkin sensor; third, we demonstrate zero-shot generalization of models trained on one instance of AnySkin to new instances, and compare it with popular existing tactile solutions like DIGIT and ReSkin. Videos and more details can be found on https://anon-anyskin.github.io.
- Neural Circuit Architectural Priors for Quadruped Locomotion. COSYNE and NAISYS, published, 2024.
Learning-based approaches to quadruped locomotion commonly adopt generic policy architectures like fully connected MLPs. As such architectures contain few inductive biases, it is in practice common to incorporate priors in the form of rewards, training curricula, imitation data, or trajectory generators. In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution to provide useful architectural priors. For instance, a horse can walk within hours of birth and efficiently improve its ability with practice. In this work, we explore the advantages of a biologically inspired ANN architecture for quadruped locomotion. Our architecture achieves good innate performance and better final performance than MLPs, while using less data and orders of magnitude fewer parameters. Our architecture also exhibits better generalization to task variations, even admitting deployment on a physical robot without standard sim-to-real methods. Our work shows that neural circuits can provide valuable priors for locomotion and encourages future work in neural circuit architectural priors.
- Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling. Raunaq Bhirangi, C Wang, Venkatesh Pattabiraman, and 4 more authors. International Conference on Machine Learning (ICML), published, 2024.
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements). While classical approaches are powerful for locally-linear prediction problems, they often fall short when using real-world sensors. These sensors are typically non-linear, are affected by extraneous variables (e.g. vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space Models (HiSS), a conceptually simple, new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms prior sequence models such as causal transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS demonstrates efficient scaling to smaller datasets and is compatible with existing data-filtering techniques.
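The temporal hierarchy described in the abstract can be illustrated with scalar linear recurrences: a low-level state-space scan summarizes fixed-size chunks of the raw signal, and a high-level scan then operates at the coarser chunk-level timescale. This is a toy sketch only; HiSS uses learned, structured SSM layers (e.g. S4 or Mamba), and every coefficient, chunk size, and function name here is an illustrative assumption.

```python
def ssm_scan(inputs, a, b, c, state=0.0):
    """Scalar linear SSM scan: state' = a*state + b*u, output = c*state'."""
    outputs = []
    for u in inputs:
        state = a * state + b * u
        outputs.append(c * state)
    return outputs, state

def hierarchical_ssm(raw_signal, chunk_size=10):
    """Two-level hierarchy: a low-level SSM compresses each chunk of raw
    sensor readings into one summary feature (its final state); a
    high-level SSM then predicts from the sequence of summaries."""
    summaries = []
    for i in range(0, len(raw_signal), chunk_size):
        chunk = raw_signal[i:i + chunk_size]
        _, final_state = ssm_scan(chunk, a=0.9, b=0.1, c=1.0)
        summaries.append(final_state)  # one feature per chunk
    # The high-level SSM runs at the chunk-level timescale.
    predictions, _ = ssm_scan(summaries, a=0.5, b=0.5, c=1.0)
    return predictions

# 100 raw steps with chunk_size=10 yield 10 chunk-level predictions.
preds = hierarchical_ssm([1.0] * 100)
```

The point of the sketch is structural: the high-level model sees a 10x shorter sequence than the raw signal, which is what makes the hierarchy useful for long raw-sensor streams.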
2022
- UNav: Vision-Based Navigation System for Blind/Low-Vision People. Anbang Yang, Venkatesh Pattabiraman, and Chen Feng. 2022.
- Elastica: A Geometrically Nonlinear Elastic Model for a Flexible Robotic Arm for Space Applications. Poornakanta Handral, Venkatesh Pattabiraman, and Ramsharan Rangarajan. 2022.
- Design and Development of an SMA Actuated Humanoid (5-Fingered) Manipulator with Underwater Operability. Venkatesh Pattabiraman, Saurav Kambil, and I.A. Palani. 2022.
2021
- Optimization of a Shape Memory Alloy (SMA) Actuated Marine Robotic Flapper. S Jayachandran, Saurav Kambil, Venkatesh Pattabiraman, and 1 more author. 2021.