A Cornell University virtual reality research project was carried out with the help of our site's AI animation technology.
In an exciting development, a research project at Cornell University is leveraging a cutting-edge AI platform to revolutionize the creation of robotic assistants for scuba divers. The project, a collaboration between the Virtual Embodiment Lab (VEL) and the Lab for Integrated Sensor Control (LISC) at Cornell University, is set to be presented at the IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024).
The project, titled UnRealTHASC: A Cyber-Physical XR Testbed for Underwater Real-Time Human Autonomous Systems Collaboration, relies on the AI animation platform as a pivotal part of its pipeline. The platform, accessible through a dedicated website, has been instrumental in expanding the research team's data repository, potentially by drawing on existing video datasets of scuba divers.
One of the platform's key advantages is its speed and cost-effectiveness: setup is seven times faster than alternative software tools, and it costs 64% less than traditional motion capture tools. Its FBX file export feature has been particularly valuable, allowing the Cornell research team to bring animations into Unity and extract key motion metrics.
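The article does not spell out which motion metrics the team extracted in Unity. As a rough illustration only, the Python sketch below (the function name, data layout, and frame rate are all assumptions) shows the kind of metrics that can be computed once joint trajectories have been sampled from an imported animation clip.

```python
import numpy as np

def motion_metrics(joint_positions: np.ndarray, fps: float = 30.0) -> dict:
    """Compute simple motion metrics from joint trajectories.

    joint_positions: array of shape (frames, joints, 3), e.g. world-space
    joint positions sampled per frame from an imported animation.
    """
    # Frame-to-frame displacement of every joint.
    deltas = np.diff(joint_positions, axis=0)
    speeds = np.linalg.norm(deltas, axis=-1) * fps  # units per second

    return {
        "mean_joint_speed": float(speeds.mean()),
        "peak_joint_speed": float(speeds.max()),
        # Total distance travelled by each joint over the clip.
        "path_length_per_joint": speeds.sum(axis=0) / fps,
        # Spatial extent of the motion, a rough range-of-motion proxy.
        "bounding_box_extent": (joint_positions.max(axis=(0, 1))
                                - joint_positions.min(axis=(0, 1))),
    }
```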
The platform's AI Video to Animation tool strengthened the output of the hydrodynamic motion model, a central focus of the project that supports the development of robot buddies for scuba divers. The researchers also used the platform's video editor to add camera angles, visualizing animations from multiple perspectives.
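The article does not describe the hydrodynamic motion model itself. For context only, models of this kind typically include a quadratic drag term; the sketch below is a generic textbook formulation rather than the team's model, and every parameter value is a placeholder.

```python
import numpy as np

def quadratic_drag(velocity: np.ndarray,
                   rho: float = 1000.0,   # water density, kg/m^3
                   cd: float = 1.0,       # drag coefficient (placeholder)
                   area: float = 0.05) -> np.ndarray:  # reference area, m^2
    """Quadratic drag force F = -0.5 * rho * Cd * A * |v| * v on a body segment."""
    speed = np.linalg.norm(velocity)
    return -0.5 * rho * cd * area * speed * velocity
```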
To verify accuracy, the Cornell research team compared the AI system's video reconstruction data with the motion capture data from the original dataset. The animation outputs were easily interoperable with other software tools, making it simple to fold the platform into the team's research workflow.
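The article does not say how agreement with the motion capture data was quantified. One common choice for this kind of validation is the mean per-joint position error (MPJPE); a minimal sketch, assuming both sequences are already time-aligned and expressed in the same coordinate frame and units, might look like this:

```python
import numpy as np

def mpjpe(reconstructed: np.ndarray, mocap: np.ndarray) -> float:
    """Mean per-joint position error between two (frames, joints, 3) trajectories."""
    assert reconstructed.shape == mocap.shape
    errors = np.linalg.norm(reconstructed - mocap, axis=-1)  # (frames, joints)
    return float(errors.mean())
```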
The AI platform's integration of real-world underwater motion sensing with virtual embodiment modeling and AI processing is transforming the development of adaptive robotic assistants capable of interacting effectively with scuba divers in dynamic underwater contexts. By enhancing situational awareness, agility, and responsiveness tailored to the unique underwater environment, these robotic assistants are poised to significantly improve diver safety and operational efficiency.
Cornell University's Virtual Embodiment Lab (VEL), led by Professor Andrea Stevenson Won, whose research focuses on immersive media and human perception, is at the helm of this innovative project. The project authors include Sushrut Surve, Jia Guo, Jovan Menezes, Connor Tate, Yiting Jin, Justin Walker, Silvia Ferrari, and Andrea Stevenson Won.
A significant contribution to the project came from Jiahao Liu, a researcher in Cornell's VR lab, who created a script to connect motion capture rigs with the FBX files exported from the website's AI. Full details of that script have not been published, but its benefits for the team's workflow are clear.
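Since the script itself is not public, the following is a purely hypothetical sketch of what connecting a motion capture rig to an FBX rig can involve at its simplest: remapping bone names between the two skeleton conventions. All bone names and the mapping below are placeholders.

```python
# Hypothetical bone-name mapping between a mocap skeleton and an FBX rig.
# Both naming conventions are placeholders, not the actual rigs used.
MOCAP_TO_FBX_BONES = {
    "Hips": "avatar_Hips",
    "Spine": "avatar_Spine",
    "LeftUpLeg": "avatar_LeftUpLeg",
    "RightUpLeg": "avatar_RightUpLeg",
    "LeftArm": "avatar_LeftArm",
    "RightArm": "avatar_RightArm",
}

def retarget_frames(mocap_frames: list[dict]) -> list[dict]:
    """Relabel per-frame mocap rotations with the FBX rig's bone names."""
    return [
        {MOCAP_TO_FBX_BONES[bone]: rotation
         for bone, rotation in frame.items()
         if bone in MOCAP_TO_FBX_BONES}
        for frame in mocap_frames
    ]
```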
Taken together, this integration marks a significant step forward in the development of underwater robotic assistants: adaptive systems that can interact effectively with scuba divers in dynamic underwater environments.
- The collaboration between Cornell University's Virtual Embodiment Lab and the Lab for Integrated Sensor Control is leveraging the AI animation platform in UnRealTHASC, a project to create robotic assistants for scuba divers that will be presented at the IEEE RO-MAN 2024 conference.
- The AI platform, accessible through a dedicated website, has expanded the research team's data repository, potentially using existing video datasets of scuba divers.
- The AI system is fast and cost-effective: setup is seven times faster than alternative software tools, and it costs 64% less than traditional motion capture tools.
- The platform's FBX file export feature allows animations to be taken into Unity, where key motion metrics can be extracted.
- The AI Video to Animation tool within the platform has strengthened the output of the hydrodynamic motion model, a central focus of the project aimed at helping develop robot buddies for scuba divers.
- The researchers used the platform's video editor to add camera angles, visualizing animations from multiple perspectives.
- The AI system's video reconstruction data was compared with the motion capture data from the original dataset to ensure accuracy.
- The animation outputs are easily interoperable with additional software tools, making it simple for the team to incorporate the AI platform into their research workflow.
- The integration of real-world underwater motion sensing with virtual embodiment modeling and AI processing is transforming the development of adaptive robotic assistants, enhancing situational awareness, agility, and responsiveness tailored to the unique underwater environment.
- Cornell University's Virtual Embodiment Lab, led by Professor Andrea Won, whose research focuses on immersive media and human perception, is at the helm of this innovative project.
- Jiahao Liu, a researcher in Cornell's VR lab, contributed a script connecting motion capture rigs with the FBX files from the website's AI, an integration with clear potential benefits for the development of underwater robotic assistants.