Robotic Farming of the Future

The University of Sydney’s Australian Centre for Field Robotics is a pioneer in robotic farming. Having developed a series of driverless tractors, the centre gives us a sneak peek at how future farms and orchards will operate in the era of mass automation.

How long until robots rule the world?

Ben Goertzel joins Sophia The Robot and Han The Robot on stage! Robot intelligence becomes more impressive with each passing year, but when will AI actually surpass humans? Hello, hello everyone, it's a pleasure to be back here with Sophia once again, and with Sophia's brother, who we usually keep locked in the basement back in Hong Kong; he doesn't get out that much. So I've got a lot of stuff to show and tell you this time. I'm actually here wearing two hats, well, one physical hat and two virtual hats. I'm here as the chief scientist of Hanson Robotics; I've led the software team creating the brains of these amazing robots. I'm also here as the CEO of SingularityNET, the blockchain-based AI platform which we're building and which is helping to supply these robots with some of their intelligence. So we're going to have some exciting announcements from SingularityNET coming up here, and also I'm going to show you some of the upgrades to these robots…

MIT Robot Learns How to Play Jenga

Using machine learning and sensory hardware, Alberto Rodriguez, assistant professor of mechanical engineering, and members of MIT’s MCube Lab have developed a robot that is learning how to play the game Jenga®. The technology could be used in robots for manufacturing assembly lines. Jenga is a complex game requiring a steady hand and nerves of steel. As humans, we combine our senses of sight and touch to master this game. Now, researchers at MIT's MCube Lab have devised an algorithm to replicate this ability using a robot. Unlike typical machine-learning methods that rely on huge data sets to decide their next best action, this robot learns and uses a hierarchical model that enables gentle and accurate extraction of pieces. This model allows the robot to estimate the state of a piece, simulate possible moves, and decide on a favorable one. It divides the possible interactions between the robot and the Jenga tower into clusters, each with its own set of physics. The robot efficiently and clearly identifies when a piece feels stuck or free and decides how to extract it using far less data. This approach is a successful example of AI moving into the physical world. The robot learns as it interacts with its environment and captures some of the essential skills that define human manipulation.
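
The description only sketches the algorithm at a high level, so here is a minimal, hypothetical Python sketch of that cluster-then-simulate decision loop: probe measurements are assigned to a few qualitative physics clusters (e.g. "free" vs. "stuck"), each cluster has its own simple forward model, and candidate extractions are scored before committing. All names, thresholds, and the toy models are invented for illustration; this is not the MCube lab's actual implementation.

```python
import random

# Hypothetical sketch of a hierarchical, cluster-based decision loop in the
# spirit of the MCube Jenga work; names and models are invented for
# illustration, not taken from the actual system.

def classify_interaction(force, displacement):
    """Assign a gentle-push measurement to a qualitative physics cluster."""
    if force < 0.5 and displacement > 1.0:
        return "free"
    if force > 2.0 and displacement < 0.2:
        return "stuck"
    return "partially_loaded"

def simulate_extraction(cluster, push_distance):
    """Toy per-cluster forward model: probability the tower survives."""
    base = {"free": 0.95, "stuck": 0.15, "partially_loaded": 0.60}[cluster]
    return max(0.0, base - 0.02 * push_distance)

def choose_move(probe_readings):
    """Probe each piece gently, then pick the safest candidate to extract."""
    best_piece, best_score = None, -1.0
    for piece, (force, displacement) in probe_readings.items():
        cluster = classify_interaction(force, displacement)
        score = simulate_extraction(cluster, push_distance=5.0)
        if score > best_score:
            best_piece, best_score = piece, score
    return best_piece, best_score

if __name__ == "__main__":
    # Fake probe data: (measured force in N, displacement in mm) per piece.
    readings = {p: (random.uniform(0, 3), random.uniform(0, 2)) for p in range(12)}
    piece, score = choose_move(readings)
    print(f"extract piece {piece} (estimated survival probability {score:.2f})")
```

The point of the hierarchy is data efficiency: a handful of clusters with simple physics can be learned from far fewer interactions than one monolithic end-to-end model.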

Robot’s Delight – Japanese robots rap about their Artificial Intelligence

“Robot’s Delight – A Lyrical Exposition on Learning by Imitation from Human-Human Interaction” was our video submission, which won Best Video at the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017). Authors: Dylan F. Glas, Malcolm Doering, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro. Lyrics and music production: Dylan F. Glas. Camerawork and video editing: Malcolm Doering. Machine learning and robot control software: Phoebe Liu. Thanks to Jason Lim for his help with robot choreography. Select video clips © 2016 IEEE, reused with permission. For details about the learning-by-imitation research, see our 2016 IEEE Transactions on Robotics paper: Phoebe Liu, Dylan F. Glas, Takayuki Kanda, and Hiroshi Ishiguro, “Data-Driven HRI: Learning Social Behaviors by Example from Human-Human Interaction”, IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988–1008, 2016. This research was supported by the JST ERATO Ishiguro Symbiotic Human-Robot Interaction Project.

Donald AI Trump Demonstrates Fun New Facial Manipulation Tech (ICface)

Use my models to make Trump/Obama/Clinton/Sanders say anything: Make a realistic Trump deepfake video (for comedic purposes only): Join Coding Elite (get early access to new videos and projects of mine, and request that my models try a new text or song): Donald Trump’s voice was created with my own Trump TTS model, trained using implementations of the papers “Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis” and “Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron”. Given a reference audio, this model can generate speech in that style; in this case I have made the voice far calmer by using a different reference audio (compare this voice to others on my channel where Trump tries to rap). ICface paper: ICface GitHub (includes trained models): For a more impressive example of manipulating facial expressions (done using my own method of transferring facial expressions), see: This requires vastly more effort than the ICface example here, though. Also see “Deep Video Portraits.” Here goes: it's Donald Trump. Recently I have taken a break from singing and found this project called ICface which, given a single photo of someone, is able to transfer…
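
The “Style Tokens” paper cited above conditions the synthesizer on a style embedding computed by attending over a small bank of learned tokens, with an encoding of the reference clip as the query; that is why swapping the reference audio changes how calm the generated voice sounds. Below is a minimal NumPy sketch of that attention step under assumed shapes. The dimensions, weights, and names are illustrative, not the author's actual Trump TTS model.

```python
import numpy as np

# Minimal sketch of the Global Style Tokens idea: a reference encoder
# summarizes a reference clip, attention over a small bank of learned style
# tokens yields a style embedding, and that embedding conditions the
# synthesizer. Shapes and weights here are random stand-ins for illustration.

rng = np.random.default_rng(0)

NUM_TOKENS, TOKEN_DIM, REF_DIM = 10, 256, 128

style_tokens = rng.standard_normal((NUM_TOKENS, TOKEN_DIM))  # learned during training
W_query = rng.standard_normal((REF_DIM, TOKEN_DIM))          # learned projection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def style_embedding(reference_encoding):
    """Attend over the token bank using the reference clip as the query."""
    query = reference_encoding @ W_query                 # (TOKEN_DIM,)
    scores = style_tokens @ query / np.sqrt(TOKEN_DIM)   # (NUM_TOKENS,)
    weights = softmax(scores)                            # attention over tokens
    return weights @ style_tokens                        # (TOKEN_DIM,) style vector

# A "calm" reference clip and an "energetic" one yield different embeddings,
# which is how changing the reference audio changes the delivery of the voice.
calm_ref = rng.standard_normal(REF_DIM)
rap_ref = rng.standard_normal(REF_DIM)
print(style_embedding(calm_ref)[:4])
print(style_embedding(rap_ref)[:4])
```

In the full system this style vector would be concatenated with the text encoder states of a Tacotron-style model at every decoder step; the sketch stops at the embedding itself.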