@timschneider94.bsky.social
timschneider94.bsky.social
One more thing — if you are looking to replace the Franka gripper with something more real-time friendly, we've got you covered:
Our dynamixel-api package lets you control any Dynamixel-based gripper directly from Python.
🔗 github.com/TimSchneider...
Special thanks to Erik Helmut!
GitHub - TimSchneider42/dynamixel-api: Easy-to-use Python API for DYNAMIXEL motors and DYNAMIXEL-based grippers.
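For a feel of what driving a Dynamixel gripper from Python looks like, here is a purely illustrative sketch; the import, class, and method names are placeholders I made up, not the actual dynamixel-api interface, so check the repository README for the real usage:

```python
# Illustrative sketch only: every name below is a placeholder, NOT the actual
# dynamixel-api interface. See the repository README for the real API.
from dynamixel_api import GripperController  # placeholder import

# Placeholder constructor: serial device and baud rate of the Dynamixel bus
gripper = GripperController(device="/dev/ttyUSB0", baud_rate=57600)
gripper.connect()
gripper.move_to(0.5)    # placeholder: command a normalized opening width
gripper.disconnect()
```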
timschneider94.bsky.social
Also, franky lets you access some functionality that is otherwise only available through the web interface, such as enabling FCI and unlocking the brakes, directly from Python!
But please don't tell Franka Robotics 🤫, because using their API like that is probably illegal.
timschneider94.bsky.social
But wait, there is more!
franky exposes most libfranka functionality in its Python API:
🔧 Redefine end-effector properties
⚖️ Tune joint impedance
🛑 Set force/torque thresholds
…and much more!
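For instance, the three settings above might look roughly like this; I am assuming here that franky exposes libfranka's setCollisionBehavior and setJointImpedance as snake_case methods on Robot, so treat the exact names and signatures as assumptions and check the API docs:

```python
from franky import Robot

robot = Robot("172.16.0.2")  # placeholder: your robot's FCI IP address

# Force/torque thresholds for contact and collision detection
# (assumed to mirror libfranka's setCollisionBehavior: per-joint torque
#  thresholds in Nm, then per-Cartesian-axis force thresholds in N)
robot.set_collision_behavior(
    [20.0] * 7,  # lower torque thresholds (contact)
    [30.0] * 7,  # upper torque thresholds (collision)
    [10.0] * 6,  # lower force thresholds (contact)
    [20.0] * 6,  # upper force thresholds (collision)
)

# Per-joint stiffness, assumed to mirror libfranka's setJointImpedance
robot.set_joint_impedance([3000, 3000, 3000, 2500, 2500, 2000, 2000])
```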
timschneider94.bsky.social
Here’s how simple robot control looks with franky 👇

No ROS nodes. No launch files. Just five lines of Python.
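The code screenshot does not carry over here, so here is a minimal sketch in the spirit of franky's README quickstart (the IP address is a placeholder for your robot's FCI address):

```python
from franky import Affine, CartesianMotion, Robot, ReferenceType

robot = Robot("172.16.0.2")            # placeholder: your robot's FCI IP address
robot.relative_dynamics_factor = 0.05  # keep motions slow while testing

# Move the end effector 20 cm in the positive x direction, relative to its current pose
motion = CartesianMotion(Affine([0.2, 0.0, 0.0]), ReferenceType.Relative)
robot.move(motion)
```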
timschneider94.bsky.social
🔧 Installation = 3 simple steps:

1️⃣ Install a real-time kernel
2️⃣ Grant real-time permissions to your user
3️⃣ pip install franky-control

…and you’re ready to control your Franka robot!
timschneider94.bsky.social
franky supports position & velocity control in both joint and task space — plus gripper control, contact reactions, and more! 🤖
With franky, you get real-time control in both C++ & Python: commands are fully preemptible, and Ruckig replans smooth trajectories on the fly.
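A rough sketch of what preemption could look like; I am assuming an asynchronous keyword on move and a join_motion call here, so verify the exact names against the docs:

```python
from franky import Affine, CartesianMotion, JointMotion, Robot, ReferenceType

robot = Robot("172.16.0.2")            # placeholder: your robot's FCI IP address
robot.relative_dynamics_factor = 0.05

# Start a Cartesian motion without blocking (assumed keyword: asynchronous)
robot.move(CartesianMotion(Affine([0.3, 0.0, 0.0]), ReferenceType.Relative),
           asynchronous=True)

# A new command preempts the running one; Ruckig replans a smooth trajectory
# starting from the robot's current kinematic state.
robot.move(JointMotion([0.0, -0.8, 0.0, -2.3, 0.0, 1.6, 0.8]), asynchronous=True)

robot.join_motion()  # assumed name: block until the last motion has finished
```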
Reposted
nicobohlinger.bsky.social
Robot Randomization is fun!
timschneider94.bsky.social
I would like to thank the first author, Janis Lenz, and my collaborators, Theo Gruner, @daniel-palenicek.bsky.social, Inga Pfenning, and @jan-peters.bsky.social, for this amazing work!
timschneider94.bsky.social
5️⃣ In conclusion, we find that complementing vision with tactile sensing helps train more robust policies in more challenging settings. In the future, we plan to extend our analysis to even more challenging tasks, such as screw or lightbulb insertion.
timschneider94.bsky.social
4️⃣ We also find that even for wider holes, the resulting vision-only policy is significantly less robust to changes in the environment (different hole sizes or angles) when evaluated zero-shot. In contrast, the vision-tactile policy remains robust even under unseen conditions.
timschneider94.bsky.social
3️⃣ Results?
Turns out, as long as the hole is wide enough, a vision-only agent learns to solve the task just as well as a vision-tactile agent. However, once we make the hole tighter, we see that the vision-only agent fails to solve the task and gets stuck in a local minimum.
timschneider94.bsky.social
2️⃣ What did we do?
We built a real, fully autonomous, and self-resetting tactile insertion setup and trained model-based RL directly in the real world. Using this setup, we ran extensive experiments to understand the role of vision and touch in this task.
timschneider94.bsky.social
1️⃣ Robotic insertion in the real world is still a challenging task. Humans use a combination of vision and touch to exhibit dexterous behavior in the face of uncertainty. We wanted to know: What role do vision and touch play when RL agents learn to solve real-world insertion?
timschneider94.bsky.social
Stoked to present another work at RLDM 2025! If you’re into dexterous robotics, multimodal RL, or tactile sensing, swing by Poster 100 today to see what we cooked up 🦾✨
#Robotics #TactileSensing #RL #DexterousManipulation @ias_tudarmstadt

🧵
timschneider94.bsky.social
Big thanks to my collaborators Cristiana de Farias, Roberto Calandra, Liming Chen, and @jan-peters.bsky.social!
timschneider94.bsky.social
9️⃣ Come chat with us!
Interested in active perception, transformers, or tactile robotics? Stop by Poster 105 at RLDM this afternoon and let’s connect!
🗓️ 16:30 - 19:30
📍 Poster 105

Paper preprint: arxiv.org/pdf/2505.06182
TactileMNIST benchmark: sites.google.com/robot-learni...
timschneider94.bsky.social
8️⃣ Limitations & Future Directions:
Like all deep RL, TAP needs a lot of data. Next steps:
- Improve sample efficiency (think: pre-trained models)
- Apply TAP on real robots (sim2real transfer)
- Scale up to multi-finger/multi-modal (vision+touch) perception
timschneider94.bsky.social
7️⃣ What about baselines?
TAP outperformed both random and prior state-of-the-art (HAM) baselines, highlighting the value of attention-based models and off-policy RL for tactile exploration.
timschneider94.bsky.social
6️⃣ We observe that TAP learns reasonable strategies. E.g., when estimating the pose of a wrench, TAP first scans the surface to find the handle and then moves towards one of its ends to pin down the wrench's position and orientation.
timschneider94.bsky.social
5️⃣ Key Experiments:
We tested TAP on a variety of ap_gym (github.com/TimSchneider...) tasks from the TactileMNIST benchmark (sites.google.com/robot-learni...).
In all cases, TAP learns to actively explore & infer object properties efficiently.
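As a rough picture of what running one of these tasks might look like, here is a sketch that assumes ap_gym environments follow the standard Gymnasium API and register themselves on import; the environment ID is a placeholder, not a real one from the benchmark:

```python
import gymnasium as gym
import ap_gym  # assumption: importing registers the active-perception environments

# Placeholder ID: look up the real TactileMNIST environment names on the benchmark site.
env = gym.make("TactileMNIST-v0")

obs, info = env.reset(seed=0)
for _ in range(100):
    # In active perception, the action typically bundles the sensing motion with the
    # current prediction; random sampling here just exercises the interaction loop.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```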
timschneider94.bsky.social
4️⃣ How Does TAP Work?
TAP jointly learns action and prediction with a shared transformer encoder, using a combination of RL and supervised learning. We show that TAP's formulation arises naturally when optimizing a supervised learning objective w.r.t. both action and prediction.
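Not the paper's derivation, just one schematic way to read that last sentence: write τ for the observation history collected by policy π and p_θ(y | τ) for the prediction head's likelihood of the target property y, then consider the single objective

\[ \max_{\theta,\,\pi}\; \mathbb{E}_{\tau \sim \pi}\big[\log p_\theta(y \mid \tau)\big]. \]

Its gradient w.r.t. θ is ordinary supervised learning, while its optimization w.r.t. π is an RL problem whose reward is the prediction log-likelihood, which is why action and prediction can share one encoder and be trained jointly.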
timschneider94.bsky.social
3️⃣ Introducing TAP:
We propose TAP (Task-agnostic Active Perception) — a novel method that combines RL and transformer models for tactile exploration. Unlike previous methods, TAP is completely task-agnostic, i.e., it can learn to solve a variety of active perception problems.