Stanford IPRL Lab @StanfordIPRL
Stanford Interactive Perception and Robot Learning Lab, directed by Jeannette Bohg @leto__jean. @StanfordAILab · iprl.stanford.edu · Stanford · Joined May 2019
66 Tweets · 1K Followers · 38 Following · 72 Likes
We want our robots to extrapolate from a few examples of a manipulation task to many variations. Embedding equivariance in both our object representation and our policy architecture allows our 🤖 to do just that.
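The equivariance property described above can be illustrated with a toy example. A minimal sketch (all names are illustrative, not the lab's actual EquivAct code): a "policy" that only uses relative geometry is automatically translation-equivariant, so shifting the scene shifts the predicted action identically.

```python
# Toy translation-equivariant "policy" over a 2-D point cloud: the grasp
# target is the centroid plus a fixed offset in the object frame. Because it
# only uses relative geometry, translating the input translates the output.
# Names and numbers are illustrative, not EquivAct's API.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def policy(points, offset=(0.1, 0.0)):
    c = centroid(points)
    return (c[0] + offset[0], c[1] + offset[1])

cloud = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
t = (2.0, -3.0)  # translation applied to the whole scene
shifted = [(x + t[0], y + t[1]) for (x, y) in cloud]

a1 = policy(cloud)
a2 = policy(shifted)
# Equivariance: acting on the shifted scene equals shifting the original action.
assert abs(a2[0] - (a1[0] + t[0])) < 1e-9 and abs(a2[1] - (a1[1] + t[1])) < 1e-9
```

The same idea extends to rotations and scale when the policy architecture is built from equivariant layers rather than hand-written geometry.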
One policy learned from 22 different types of robots! @StanfordIPRL has contributed data to this massive new dataset that opens up a ton of new research questions. Thanks to @QuanVng, @KarlPertsch, and the Google team, as well as to the 34 academic labs!
Recognizing symbols like "dish in dishwasher" or "cup on table" enables 🤖 task planning. But how do we get the data to train models that recognize these symbols? Introducing "Grounding Predicates through Actions" to automatically label human video datasets 🧵 sites.google.com/stanford.edu/g…
Want to grasp a hammer by the handle or a plush animal by its left arm? We call this semantic manipulation! We found that keypoints are a great representation to ground language and facilitate precise manipulation. Check out this thread for details on KITE! 🪁
At #CVPR2023, we present CARTO - a model that reconstructs articulated objects from a single stereo image. This includes the object's 3D shape, 6D pose, size, joint type, and joint state. All this in a category-agnostic fashion. carto.cs.uni-freiburg.de
How do you sequence learned skills for a manipulation task? Use STAP to plan with learned skills and maximize the expected success of each skill in the plan, where success is encoded in Q-functions. 🐙 Code for STAP is now on GitHub: github.com/agiachris/STAP
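As a rough illustration of the planning idea (toy numbers and a hypothetical discrete skill parameter, not the actual STAP code): score a candidate skill sequence by the product of per-skill success estimates from learned Q-functions, then pick the parameters that maximize the joint score.

```python
# Hedged sketch: rank skill parameterizations by the product of per-skill
# success estimates. The Q-values and the discrete "grasp angle" parameter
# are illustrative stand-ins for STAP's learned Q-functions and continuous
# optimization.

from itertools import product

Q = {
    "pick":  {0: 0.9, 1: 0.6},   # success estimate of pick at each grasp angle
    "place": {0: 0.5, 1: 0.8},   # success estimate of the downstream place
}

def plan_score(params):
    score = 1.0
    for skill, p in zip(["pick", "place"], params):
        score *= Q[skill][p]
    return score

best = max(product([0, 1], repeat=2), key=plan_score)
# Joint optimization matters: greedily picking angle 0 for "pick" (0.9)
# would hurt the downstream "place" (0.5), so angle 1 wins overall.
```

Here `best` comes out as `(0, 1)` in this toy table; greedy per-skill selection would have chosen `(0, 0)` and scored worse.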
Seeing TidyBot come together in my lab has been fun! And yes, our lab has been exceptionally clean 🧹 Many of you commented on the mobile platform; I'm so pleased to see it move! Let me tell you the story behind these platforms! 🧵 x.com/jimmyyhwu/stat…
When organizing a home, everyone has unique preferences for where things go. How can household robots learn your preferences from just a few examples? Introducing 𝗧𝗶𝗱𝘆𝗕𝗼𝘁: Personalized Robot Assistance with Large Language Models Project page: tidybot.cs.princeton.edu
TidyBot: Personalized Robot Assistance with Large Language Models approach enables fast adaptation and achieves 91.2% accuracy on unseen objects in our benchmark dataset. We also demonstrate our approach on a real-world mobile manipulator called TidyBot, which successfully puts…
The key insight of our TidyBot work: summarization with LLMs is an effective way to achieve generalization in robotics from just a few example preferences. Check out this great thread by Jimmy Wu, who is visiting @StanfordIPRL!
Large Language Models promise to replace task planners in robotics. But how do we verify that these plans are correct, especially for tasks that require long-horizon reasoning? 👇 Check out Kevin's 🧵 on Text2Motion!
3D multi-object tracking is a challenging problem, because it requires effective data association, track lifecycle management, false-positive elimination, and false-negative propagation. To address all 4 of these problems, we propose ShaSTA! 🧵 1/5 sites.google.com/view/shasta-3d…
Active Task Randomization (ATR) learns to create novel and feasible tasks for acquiring generalizable visuomotor skills. The learned skills can be composed to solve unseen sequential manipulation tasks in the real world. arxiv.org/abs/2211.06134 sites.google.com/view/active-ta…
People often use customized tools for a variety of manipulation tasks: 🥄🍴🥢🪛🪝🪓🛠️✂️🧵📎 We look at the problem of automatically designing customized tools for robots and leverage differentiable simulation and continual learning. Video & Paper: sites.google.com/stanford.edu/l… 🧵
How do you perform a manipulation task with a novel object that is heavily occluded? We propose a method for task-driven in-hand manipulation of unknown objects with tactile sensing. sites.google.com/stanford.edu/t… 🧵 1/n
So many ways to generate a sequence of robot skills to solve a task that requires long-horizon reasoning: LLMs, task planners, high-level policies, … But how do you ensure that each skill is executed such that the next skill can succeed? sites.google.com/stanford.edu/t…
To do daily chores, robots need to understand articulated objects. Sometimes a single picture of an object is deceiving. We propose a novel method that leverages temporal data to estimate the object articulation mechanism. tinyurl.com/ycyva37v 🧵 1/9
Our third tutorial speaker is Jeannette Bohg @leto__jean (@Stanford). We look forward to Jeannette's tutorial on "Representations and Representation Learning in Robotics". Speakers, CfP, and more details for #CoRL2022 at corl2022.org
Thankful for a fruitful collaboration with TRI! Check out this post for a spotlight of one of our projects on multi-object tracking
How can a robot learn from a few human demonstrations and generalize to variations of the task? Equivariance is all you need! ⬇️ Check out our EquivAct 😺 Cute cat video included
From a few examples of solving a task, humans can: 🚀 easily generalize to unseen appearance, scale, pose 🎈 handle rigid, articulated, soft objects 0️⃣ all that with zero-shot transfer. Introducing EquivAct to help robots gain these capabilities. 🔗 equivact.github.io 🧵↓
Incorporating equivariance into representation learning and policy architectures to help robots gain better generalization capabilities 👇
From a few examples of solving a task, humans can: 🚀 easily generalize to unseen appearance, scale, pose 🎈 handle rigid, articulated, soft objects 0️⃣ all that with zero-shot transfer. Introducing EquivAct to help robots gain these capabilities. 🔗 equivact.github.io 🧵↓
How do you autonomously learn a library of composable, visuomotor skills? At #IROS2023 we present an approach that can create novel yet feasible tasks to gradually train the skill policies on harder and harder tasks. Monday morning Poster Session: MoAIP-19.9
Active Task Randomization (ATR) learns to create novel and feasible tasks for acquiring generalizable visuomotor skills. The learned skills can be composed to solve unseen sequential manipulation tasks in the real world. arxiv.org/abs/2211.06134 sites.google.com/view/active-ta…
Key Insight of TidyBot: Summarization with LLMs is an effective way to achieve generalization in robotics from just a few example preferences. Jimmy presents TidyBot on Monday afternoon at #IROS2023: MoBIP-16.5 Come by to chat!
When organizing a home, everyone has unique preferences for where things go. How can household robots learn your preferences from just a few examples? Introducing 𝗧𝗶𝗱𝘆𝗕𝗼𝘁: Personalized Robot Assistance with Large Language Models Project page: tidybot.cs.princeton.edu
Not many people know this, but RT2 and our robots were on the front page of the @nytimes! Just got my personal copy. Go read the paper if you haven't yet: robotics-transformer2.github.io/assets/rt2.pdf
Grounding language via keypoints for precise, semantic manipulation! Check out our new work KITE that @priyasun_ explains in this thread 👇
Introducing KITE🪁: Keypoints + Instructions to Execution! We introduce a framework for semantic manipulation using keypoints as a representation for visual grounding and precise action inference. Learn more: tinyurl.com/kite-site (1/9) 🧵⬇️
Very excited about this first work coming out of my lab, @RobInRoboticsUT, for RSS23! CausalMoMa makes RL policy learning for whole-body MoMa easier by stabilizing backpropagation gradients. And the policies transfer 0-shot to the real world! Great work by @JiahengHu1
Training sensorimotor Mobile Manipulation policies with RL is hard: many objectives (non-collision, reach…) and high DoF (arm, base, head…) CausalMoMa (shorturl.at/gjIUW, RSS23) makes it possible by leveraging causal dependencies between action dims and rewards.
Researchers at @EPrinceton have successfully deployed a large language model to help a robotic manipulator make sense of instructions to tidy up a room. ⬇️ (via @IntEngineering) bit.ly/41r4QsJ
To make personalization easier, we can ask large language models to infer general patterns from brief text-based instructions that express user preferences. Check out Jimmy's thread on the TidyBot project to see how this idea can be brought to life with a real robot.
When organizing a home, everyone has unique preferences for where things go. How can household robots learn your preferences from just a few examples? Introducing 𝗧𝗶𝗱𝘆𝗕𝗼𝘁: Personalized Robot Assistance with Large Language Models Project page: tidybot.cs.princeton.edu
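A minimal sketch of the summarization pattern behind TidyBot, with the LLM step replaced by a hard-coded "summary" so the example stays self-contained; all object names and categories here are illustrative:

```python
# Hedged sketch: a few concrete placement examples are summarized into
# general rules (TidyBot asks an LLM to do this step; here the result is
# hard-coded), and the rules are applied to unseen objects via category
# lookup. In practice the lookup would be an open-vocabulary vision model.

examples = [
    ("yellow shirt", "laundry basket"),
    ("dark purple shirt", "laundry basket"),
    ("soda can", "recycling bin"),
]

# What one might ask an LLM to infer from the examples above:
summarized_rules = {"clothing": "laundry basket", "drink containers": "recycling bin"}

# Hypothetical category lookup for unseen objects.
category_of = {"white sock": "clothing", "beer bottle": "drink containers"}

def put_away(obj):
    return summarized_rules[category_of[obj]]

assert put_away("white sock") == "laundry basket"   # never seen a sock before
```

The point of the pattern: rules generalize where raw example matching would not, so a handful of preferences covers many unseen objects.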
@chris_j_paxton P.S. It would be cumbersome to hard-code *all* the low-level skills one needs for everyday life. So I'm not worried about robot learning being dead. It did feel different to be part of work where learning isn't the centerpiece, but it was fun to focus again on the personalization aspects.
TidyBot from a team at Princeton.
- ViLD + CLIP + LLMs for object detection and reasoning
- Engineered low-level skills for the robot
Yet another paper making me wonder "is robot learning actually dead though?" Seems like classic robotics + vision/language models are all you need.
TidyBot: Personalized Robot Assistance with Large Language Models approach enables fast adaptation and achieves 91.2% accuracy on unseen objects in our benchmark dataset. We also demonstrate our approach on a real-world mobile manipulator called TidyBot, which successfully puts…
pros: the lab is clean after running experiments :-) cons: the lab is cleaner than my apartment, and I can't take this TidyBot home... (yet)
Excited to share Text2Motion - a framework that leverages large language models to solve sequential manipulation tasks requiring complex, long-horizon reasoning. A great collaboration with @leto__jean 's lab. arxiv.org/pdf/2303.12153… @StanfordEng
Large language models (LLMs) can readily convert language instructions into high-level plans. However, should we trust robots to execute these plans without verifying that they actually satisfy the instructions and are feasible in the real world? sites.google.com/stanford.edu/t…
Announcing new work, Text2Motion! We explore how Large Language Models (LLMs) and learned robot skills can be used to solve TAMP-like tasks with two key considerations: 1. Plans should satisfy user instructions, 2. Plans should be feasible for execution. Check out Text2Motion!
Large language models (LLMs) can readily convert language instructions into high-level plans. However, should we trust robots to execute these plans without verifying that they actually satisfy the instructions and are feasible in the real world? sites.google.com/stanford.edu/t…
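A toy sketch of the kind of feasibility check Text2Motion argues for, using STRIPS-style preconditions and effects over a symbolic state; the skill definitions and API here are hypothetical, not the paper's actual implementation:

```python
# Hedged sketch: before executing an LLM-proposed plan, verify each step's
# precondition against a symbolic world state and reject infeasible plans.
# Skill schemas are illustrative; Text2Motion additionally checks geometric
# feasibility with learned skills, which this toy omits.

skills = {
    "pick(cup)": {"pre": {"handempty"}, "add": {"holding_cup"},
                  "del": {"handempty"}},
    "place(cup, table)": {"pre": {"holding_cup"},
                          "add": {"cup_on_table", "handempty"},
                          "del": {"holding_cup"}},
}

def feasible(plan, state):
    state = set(state)
    for step in plan:
        s = skills[step]
        if not s["pre"] <= state:      # precondition not satisfied
            return False
        state = (state - s["del"]) | s["add"]  # apply effects
    return True

assert feasible(["pick(cup)", "place(cup, table)"], {"handempty"})
assert not feasible(["place(cup, table)"], {"handempty"})  # nothing in hand
```

Even this symbolic check catches a common LLM failure mode: plans whose steps are individually sensible but ordered so that a precondition is never established.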
How did we come up with the tools that we use in our everyday lives? We definitely did not design tools like forks or spoons on our first attempt! Here is a picture of a vintage utensil that is used for pushing food into a spoon! So can we automatically design this?
We build an end-to-end framework for learning tool morphology for contact-rich tasks, leveraging differentiable simulators. We optimize a tool over a distribution of task variations, but training over the entire distribution is expensive and the optimization landscape is complex.
People often use customized tools for a variety of manipulation tasks: 🥄🍴🥢🪛🪝🪓🛠️✂️🧵📎 We look at the problem of automatically designing customized tools for robots and leverage differentiable simulation and continual learning. Video & Paper: sites.google.com/stanford.edu/l… 🧵
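A 1-D toy sketch of gradient-based tool design over a task distribution, with finite differences standing in for the gradients a differentiable physics simulator would supply; all quantities are illustrative:

```python
# Hedged sketch: optimize a single tool parameter L (e.g. a handle length)
# to minimize average task error over sampled task variations. A real
# differentiable simulator provides exact gradients; finite differences
# stand in here so the toy stays self-contained.

import random

random.seed(0)
tasks = [random.uniform(0.4, 0.6) for _ in range(32)]  # sampled task variations

def loss(L):
    # squared reach error averaged over the task distribution
    return sum((L - d) ** 2 for d in tasks) / len(tasks)

L, lr, eps = 0.1, 0.5, 1e-5
for _ in range(200):
    grad = (loss(L + eps) - loss(L - eps)) / (2 * eps)  # central difference
    L -= lr * grad

# For this quadratic loss, L converges to the mean target distance.
assert abs(L - sum(tasks) / len(tasks)) < 1e-3
```

Optimizing over the whole distribution at once is what gets expensive in the real setting, which is where the continual-learning component of the work comes in.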
DeXtreme is our new work on scaling sim-to-real for contact-rich manipulation with vision-based state estimation on a robot hand, using the infrastructure we have been developing with Isaac Gym over the past year. arxiv.org/abs/2210.13702 dextreme.org