Peter Schaldenbrand
October 10, 2025
CMU-RI-TR-25-101

The Robotics Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, Pennsylvania

Thesis Committee:
Prof. Jean Oh, chair
Prof. James McCann
Prof. Manuela Veloso
Prof. Ken Goldberg (University of California, Berkeley)

For the degree of Doctor of Philosophy in Robotics.

Copyright © 2025 Peter Schaldenbrand. All rights reserved.

Abstract

Robot automation is generally welcomed for tasks that are dirty, dull, or dangerous, but with expanding robotic capabilities, robots are entering domains that are safe and enjoyable, such as creative industries. Although there is widespread rejection of automation in creative fields, many people, from amateurs to professionals, would welcome supportive or collaborative creative tools. Supporting creative tasks is challenging with real-world robotics because there are limited relevant datasets, creative tasks are abstract and high-level, and real-world tools and materials are difficult to model and predict. Learning-based robotic intelligence is a promising method for creative support tools, but since the task is so complex, common approaches such as learning from demonstration would require too many samples and reinforcement learning may never converge. In this thesis, we introduce several self-supervised learning techniques to enable a robot to teach itself to support humans in the act of creativity. We formalize robots that support people in the making of things from high-level goals in the real world as a new field, Generative Robotics. We introduce an approach for supporting 2D visual art-making with paintings and drawings along with 3D clay sculpture from a fixed perspective. Because there are no robotic datasets for collaborative painting and sculpting, we designed our approach to learn real-world constraints and support collaborative interactions from small, self-generated datasets.
This thesis contributes (1) a Real2Sim2Real technique that enables a robot to create complex dynamics models from small, self-generated datasets of actions, (2) a method for planning robotic actions for long-horizon tasks in a semantically aligned representation, and (3) a self-supervised learning framework to adapt pretrained models to be compatible with robots and produce collaborative goals. We show how self-supervised learning can enable model-based robot planning approaches to paint collaboratively with humans using various painting mediums. Lastly, we extend our approach from the painting domain to the sculpting domain, demonstrating that it generalizes to new materials, tools, action representations, and state representations.

Dedicated to my family — Mum, Dad, Heinz, Liz, and Seamus —
the great gust of wind beneath my tattered wings.

Contents

List of Tables

I Introduction

1 Introduction
  1.1 Generative Robotics
    1.1.1 Real World Constraints
    1.1.2 High Level Goals
    1.1.3 Supportive, Human-Robot Co-Creativity
  1.2 Creativity and Art
  1.3 Learning-Based Robot Intelligence
  1.4 Thesis Statement
  1.5 Intellectual Merit and Contributions

2 Background
  2.1 Related Work: Generative Robotics
    2.1.1 Real-World Robotics
    2.1.2 Generative AI
    2.1.3 Generative Robotics
  2.2 Related Work: Robot Learning for Making Things
    2.2.1 Imitation Learning & Learning from Demonstration
    2.2.2 Reinforcement Learning
    2.2.3 Model Predictive Control

3 Overview
  3.1 Generalized Approach to Generative Robotics
  3.2 FRIDA Overview

4 Dynamics Model and Semantic Planning for Robot Painting
  4.1 Introduction
  4.2 Related Work
    4.2.1 Simulated Painting
    4.2.2 Robot Painting
    4.2.3 Brush Stroke Modeling
  4.3 Approach
    4.3.1 Brush Stroke Action Parameters
    4.3.2 Real Data to Simulation
    4.3.3 Differentiable Simulated Painting Environment
    4.3.4 Objective Functions
    4.3.5 Planning Algor