Make your own 3D-printed sonic tractor beam with Arduino

From magic to science, man has long dreamed of being able to manipulate objects from a distance. Pushing something with air or even sound waves has been possible for a while, but University of Bristol researcher Asier Marzo and colleagues have come up with a 3D-printable device that can not only repel small items but also attract them to the source. It does this using an array of sound transducers arranged in a dome shape at the end of a wand.

The acoustic tractor beam is also equipped with an Arduino Nano, a motor controller board, a DC-DC converter, and a LiPo battery, among other easily accessible components. The Arduino generates four half-square signals at 5Vpp and 40kHz with different phases. These signals are amplified to 25Vpp by the motor driver and fed into the transducers. A button pad can be used to change the phases so that the particle moves up and down. A 7.3V battery powers the Arduino and the logic part of the motor driver, and a DC-DC converter steps up the 7.3V to 25V for the motor driver.

Aside from entertaining friends by levitating tiny pieces of plastic, the DIY tractor beams have many possible use cases, particularly in biological research. There are some limitations, however: because objects larger than about half the wavelength of the sound cannot be stably suspended, the gadget can only trap things around a few millimeters in size.

Marzo and his team have published their project in the journal Applied Physics Letters, and shared step-by-step instructions so you can get started on building your own beam. You can read more on Phys.org as well.
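The phase arithmetic at the heart of the trick is compact enough to sketch. Below is a minimal Python illustration, not the project's firmware: the four-transducer layout and the focal point are invented for the example. Each transducer's phase offset is chosen so that its 40 kHz wave arrives at a chosen point in step with the others; recomputing the phases for a higher or lower focus is, in effect, what the button pad does.

```python
import math

SPEED_OF_SOUND = 343.0               # m/s in room-temperature air
FREQ = 40_000.0                      # the transducers' 40 kHz drive
WAVELENGTH = SPEED_OF_SOUND / FREQ   # about 8.6 mm

def phase_delay(transducer_pos, focus_pos):
    """Phase (radians) to drive a transducer so its wave arrives at
    the focal point in step with waves from the other transducers."""
    dist = math.dist(transducer_pos, focus_pos)
    return (2 * math.pi * dist / WAVELENGTH) % (2 * math.pi)

# Hypothetical 4-transducer dome (positions in meters), focusing 20 mm above center.
dome = [(0.02, 0, 0), (-0.02, 0, 0), (0, 0.02, 0), (0, -0.02, 0)]
focus = (0, 0, 0.02)
print([round(phase_delay(p, focus), 2) for p in dome])
```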

2016 best year ever for funding robotics startup companies

It was a busy and abundant year for seed, crowd, Series A/B/C/D and VC funding of robotics-related startups. 128 companies were funded, some multiple times, for a total of $1.95 billion, roughly 50% more than 2015, itself a phenomenal year with over $1.32 billion funded. Velodyne LiDAR got the most money in 2016, $150 million, with Zymergen and UBTech also getting over $100 million each. In The Robot Report's monthly recaps, fundings grew until they peaked in August and then dipped in December. August was the month when Velodyne got its $150 million and Quanergy got $90 million; the next highest that month was $35 million for FormLabs.

By application area, unmanned aerial systems companies received the greatest number of fundings (25), followed by 15 agricultural robotics startups, then service robotics for businesses, service robots for personal use, vision systems providers, self-driving systems, mobile robotics and AGV companies, plus a whole bunch of smaller categories.

Investments in robotic solutions for the ag industry were noted by Rob Leclerc of AgFunder, who said: "The number of deals grew 7% year-over-year, as we recorded 307 deals this half compared to 286 during the first half of 2015. The number of investors coming into the sector climbed 52%, from 280 in the first half of 2015 to 425 in the first half of 2016, which suggests that investors are getting more comfortable with the sector."

The following companies were also funded but didn't provide details: Raptor Maps, OnFarm, Appolo Shield, Aarav Unmanned Systems, AIO Robotics, Aloi Material Handling, Aurora Flight Sciences, Kimera Systems, NVBots, OPS-Ingersoll, OptoForce and Square Robot.

[If you have information about robotics-related fundings, or corrections or additions to the above, please email info@therobotreport.com. Thank you.]

Solar cells implanted under the skin can power pacemakers

It can be a hassle when your phone's battery runs out of juice and you have to hunt down a power outlet to recharge, but a flat battery is an even bigger hassle in implanted electronic medical devices such as pacemakers, where it often means invasive surgery to replace the battery or the entire unit. Now a new study has found that using solar cells implanted under the skin to power medical implants is a feasible approach.

The promise of solar-powered implants, which would avoid the problems of replacing or recharging batteries, has seen various research groups develop small solar cell prototypes that could be implanted under the skin and harvest the energy of the light that penetrates the skin's surface. To examine the feasibility of such technology, a team led by Lukas Bereuter of Bern University Hospital and the University of Bern in Switzerland developed 10 solar measurement devices that could be worn on the arm. The devices featured solar cells measuring 3.6 cm² (0.56 in²), small enough for implantation, and were able to measure the output power the cells generated. Optical filters covered the cells to simulate the skin's effect on the incoming light.

A total of 32 volunteers in Switzerland wore the devices for a week each during all four seasons of the year. The researchers found that, regardless of the time of year, even the individual who returned the lowest output was still able to generate 12 microwatts of electricity on average, more than the five to 10 microwatts required by a typical pacemaker.

"The overall mean power obtained is enough to completely power for example a pacemaker or at least extend the lifespan of any other active implant," says Bereuter. "By using energy-harvesting devices such as solar cells to power an implant, device replacements may be avoided and the device size may be reduced dramatically."

In addition to pacemakers, Bereuter thinks the technology could be scaled up for other solar-powered applications on people, with the total surface area of the solar cell, its placement, its efficiency and the thickness of a patient's skin all aspects that would need to be taken into account. The team's findings appear in Annals of Biomedical Engineering.

Source: Springer
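The feasibility argument reduces to a two-line power budget, restated below with the study's own numbers (the 10-microwatt figure is simply the upper end of the article's five-to-10 microwatt range for a typical pacemaker):

```python
harvested_uW = 12.0        # lowest mean harvest among the 32 wearers
pacemaker_draw_uW = 10.0   # upper end of a typical pacemaker's draw

surplus_uW = harvested_uW - pacemaker_draw_uW
print(f"Worst-case surplus: {surplus_uW:.1f} uW, "
      f"covering {harvested_uW / pacemaker_draw_uW:.0%} of the draw")
```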

Lego Boost is robot building for the rest of us

LAS VEGAS — Vernie considers me with his icy blue eyes, an orange eyebrow slightly cocked. Then, suddenly, he races forward and asks me my name. I shout it into the nearby tablet and we commence bonding. Okay, we don't so much bond as I command and Vernie, a new Lego robot, responds.

Vernie is one of five models that children ages 7 to 12 can build and program with Lego's new Boost kit. The new, 543-piece Lego set, introduced this week at CES in Las Vegas, is like the younger brother to the Lego Mindstorms EV3 robot-building kit. That set is relatively complex, uses special pieces and requires a lot more patience on the part of kids and parents to work out even the most basic programs. Doing so pays dividends, but if you've never coded before, it can be a little off-putting.

Boost eases that process in almost every way. The Lego pieces are all traditional, you can even use your existing Lego pieces with it, and the coding is largely confined to code blocks that already have lots of instructions baked in and fully compiled. On the accompanying iPad app, builders need only select and attach puzzle-like programming blocks that handle instructions like canned responses, turning 90 degrees, making the iPad listen and controlling light colors. They can daisy-chain them together and even build separate program block chains that, with the press of a virtual green "play" button, all run at once.

On the build side, the app guides children through the process of creating Vernie by breaking down his construction (he uses virtually all the set's pieces) into discrete parts. The set will not ship with printed instructions; all the build guides reside in the app. After the head and upper torso are built, Vernie will ask about going for a walk and even try to shimmy along. Realizing he has no legs, the robot and app will guide builders to construct the tread base and attach it to Vernie. "It may be the first time a Lego model has encouraged a kid to keep building," said Lego Design Director Simon Kent, who was guiding our demo (and kept programming his own voice into it for comic effect).

Like Mindstorms, Boost ships with a central hub that includes Bluetooth to connect to the iPad, a pair of motors to drive wheels and motion, and two input/output ports. It's powered by six AA batteries. There's also another motor, which includes a tachometer so it can be used as an input device; it knows when it's being turned, in which direction, how fast and by how much. A sensor module helps robots see distance, movement and colors. Each module and motor is programmable through the app.

Using the included Lego pieces, motors and sensors, you can build a cat, a guitar, a tractor, a quadruped and Vernie, the most complex of all the builds. Each one guides users through the building and programming process in a step-by-step fashion, where each completed task unlocks a new one. Tasks, though, are not dry and don't feel like programming work. Vernie can be programmed to tell jokes or rap, and the cat can be playful and programmed to drink virtual milk (and then digitally fart because it's lactose intolerant). Different tasks teach different programming skills. The app introduces the idea of how to program the hub's built-in accelerometer by having you flip the cat on its back or pick it up, with each action triggering a different response. There is room in Lego Boost to grow as a programmer.

While most blocks shield children from actual programming, some offer parameter control, like the speed of revolution on a motor (25% as opposed to 100%), and even allow young programmers to nest multiple blocks in one block via drag-and-drop, so that placing it in the chain triggers a much larger set of actions (a rough code analogy follows below). After Vernie and I got acquainted and we attached a Lego RPG, the 10-inch-tall robot showed me his shooting skills (all triggered through app programming). Vernie shot at a target and at people in the room. We even used programming to make Vernie shoot only when we clapped. While Vernie can talk, all the sound effects and the listening capability are housed in the iPad app. Not sure how that discontinuity will play out for kids.

We saw the kit, which ships this summer for $159.99, in a beta stage. While Vernie's parts and functions look mostly complete, the app was full of stock graphics and unfinished animations. Even so, what you could do with this early version was impressive. Like Apple's Swift Playgrounds, which seeks to simplify the task of teaching programming to kids, Lego Boost may succeed because it's entertaining and engaging first, with a low bar to entry for accessing and enhancing the Lego robot's actions. For now, it looks like the perfect stepladder to the richer and deeper Lego Mindstorms.
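To make the chaining and nesting idea concrete, here is a rough Python analogy. This is not Boost's actual format (the app uses graphical icon blocks, and every name below is invented); it only mirrors the structure: blocks are canned actions with optional parameters, a chain runs them in order, and a nested block hides a whole sub-chain behind a single entry.

```python
def drive_forward(speed=1.0):           # a "movement" block with a parameter
    print(f"driving forward at {speed:.0%} motor speed")

def turn(degrees=90):                   # another parameterized movement block
    print(f"turning {degrees} degrees")

def say(phrase):                        # a "sound" block, voiced by the tablet
    print(f"Vernie says: {phrase!r}")

def run_chain(chain):                   # the virtual green "play" button
    for block in chain:
        block()

def greet_routine():                    # a nested block wrapping a sub-chain
    run_chain([lambda: say("Hello!"), lambda: turn(90),
               lambda: drive_forward(0.25)])

run_chain([greet_routine, lambda: drive_forward(1.0)])
```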

What to expect of artificial intelligence in 2017

Expect to see better language understanding and an AI boom in China, among other things.

by Will Knight, January 4, 2017

Last year was huge for advancements in artificial intelligence and machine learning. But 2017 may well deliver even more. Here are five key things to look forward to.

Positive reinforcement

AlphaGo's historic victory against one of the best Go players of all time, Lee Sedol, was a landmark for the field of AI, and especially for the technique known as deep reinforcement learning. Reinforcement learning involves having a machine learn to solve a problem not through programming or explicit examples, but through experimentation combined with positive reinforcement. The idea has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems (like the game of Go). Through relentless experimentation, as well as analysis of previous games, AlphaGo figured out for itself how to play the game at an expert level.

The hope is that reinforcement learning will now prove useful in many real-world situations. And the recent release of several simulated environments should spur progress on the necessary algorithms by increasing the range of skills computers can acquire this way. In 2017, we are likely to see attempts to apply reinforcement learning to problems such as automated driving and industrial robotics. Google has already boasted of using deep reinforcement learning to make its data centers more efficient. But the approach remains experimental, and it still requires time-consuming simulation, so it'll be interesting to see how effectively it can be deployed.

Dueling neural networks

At the banner AI academic gathering held recently in Barcelona, the Neural Information Processing Systems conference, much of the buzz was about a new machine-learning technique known as generative adversarial networks. Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. By working together, these networks can produce very realistic synthetic data. The approach could be used to generate video-game scenery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs. Yoshua Bengio, one of the world's leading experts on machine learning (and Goodfellow's PhD advisor at the University of Montreal), said at NIPS that the approach is especially exciting because it offers a powerful way for computers to learn from unlabeled data, something many believe may hold the key to making computers a lot more intelligent in years to come.

China's AI boom

This may also be the year in which China starts looking like a major player in the field of AI. The country's tech industry is shifting away from copying Western companies, and it has identified AI and machine learning as the next big areas of innovation. China's leading search company, Baidu, has had an AI-focused lab for some time, and it is reaping the rewards in terms of improvements in technologies such as voice recognition and natural language processing, as well as a better-optimized advertising business. Other players are now scrambling to catch up. Tencent, which offers the hugely successful mobile-first messaging and networking app WeChat, opened an AI lab earlier this year, and the company was busy recruiting talent at NIPS. Didi, the ride-sharing giant that bought Uber's Chinese operations earlier this year, is also building out a lab and reportedly working on its own driverless cars. Chinese investors are now pouring money into AI-focused startups, and the Chinese government has signaled a desire to see the country's AI industry blossom, pledging to invest about $15 billion by 2018.

Language recognition

Ask AI researchers what their next big target is, and they are likely to mention language. The hope is that the techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively. This is a long-standing goal in artificial intelligence, and the prospect of computers communicating and interacting with us using language is a fascinating one. Better language understanding would make machines a whole lot more useful. But the challenge is a formidable one, given the complexity, subtlety, and power of language. Don't expect to get into deep and meaningful conversation with your smartphone for a while. But some impressive inroads are being made, and you can expect further advances in this area in 2017.

Backlash to the hype

As well as genuine advances and exciting new applications, 2016 saw the hype surrounding artificial intelligence reach heady new heights. While many have faith in the underlying value of technologies being developed today, it's hard to escape the feeling that the publicity surrounding AI is getting a little out of hand. Some AI researchers are evidently irritated. A launch party was organized during NIPS for a fake AI startup called Rocket AI, to highlight the growing mania and nonsense around real AI research. The deception wasn't very convincing, but it was a fun way to draw attention to a genuine problem.

One real problem is that hype inevitably leads to a sense of disappointment when big breakthroughs don't happen, causing overvalued startups to fail and investment to dry up. Perhaps 2017 will feature some sort of backlash against the AI hype machine, and maybe that wouldn't be such a bad thing.
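The generator-versus-discriminator tug-of-war described under "Dueling neural networks" fits in a few dozen lines. Below is a minimal, self-contained sketch using PyTorch (an assumed choice; the article names no framework), with the generator learning to mimic a simple 1-D Gaussian instead of images. The same adversarial loop scales up by swapping the two small networks for convolutional ones.

```python
import torch
import torch.nn as nn

def real_batch(n):                 # the "training set": a 1-D Gaussian
    return torch.randn(n, 1) * 1.5 + 4.0

def noise_batch(n):                # random input the generator shapes into data
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # Discriminator step: score real samples toward 1, generated ones toward 0.
    fake = G(noise_batch(64)).detach()
    loss_d = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    fake = G(noise_batch(64))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean of 4.0.
print(G(noise_batch(1000)).mean().item())
```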

Trump's Attacks on U.S. Intelligence Over Russia Unnerve Lawmakers

Updated Jan. 4, 2017 2:05 p.m. ET

WASHINGTON—President-elect Donald Trump's increasingly heated attacks against U.S. intelligence agencies and his continuing praise for Russian President Vladimir Putin have unnerved lawmakers from both parties, with some questioning his goals in dealing with a top U.S. adversary. Mr. Trump's animosity toward intelligence agencies has intensified in recent weeks after the Central Intelligence Agency and others reached an assessment...

Squishy Clockwork Biobot Could Dose You With Drugs from the Inside

When Swiss watchmakers invented the Geneva drive, a two-geared mechanism that produces precise ticks forward, they probably never imagined that bioengineers would one day craft a 15-mm version out of squishy hydrogel. But then, they weren't trying to make a biocompatible micromachine that could be implanted in the body to deliver doses of drugs.

This strange new biobot comes from the lab of Samuel Sia, a professor of biomedical engineering at Columbia University, in New York City. It uses neither battery nor wires, and can be controlled from outside the body to deliver a dose on command. It's a gadget well suited to this new era of personalized medicine, Sia tells IEEE Spectrum. "Doctors want to see how the patient is doing and then modify the therapy accordingly," he says. He has already tested the gizmo in lab mice with bone cancer, with exciting results that were published today in the journal Science Robotics. More on that experiment later.

Sia's team first had to invent a type of 3D printing to fabricate their tiny Geneva drive and several other soft micromachines. They came up with a fabricator that lays down layers of a hydrogel to produce rubbery solid shapes. While human hands are required to put the pieces together, Sia says those assembly steps could be automated. And it's pretty quick as is: the whole process of printing and assembling one Geneva drive takes less than 30 minutes. Today's typical 3D printers would take several hours to construct a similar device, Sia says, and most can't handle soft materials like hydrogel.

Here's the part that works like clockwork: the squishy Geneva drive clicks forward when an external magnet moves a simple gear, which is just a rubbery piece with embedded iron nanoparticles. With each click, one of six chambers lines up with a hole and a dose of medicine flows out. In the project's demo video, a magnet keeps the device running continuously to show off the mechanism, but in clinical use a doctor could apply a magnet only when a dose is required.

You may be wondering: could someone's implanted micromachine be triggered accidentally by an external magnet, or by a malicious person with fiendish magnetic powers? In other words, is the X-Men's Magneto a risk factor? "Somebody walking by with a magnet won't trigger it, but there are some cases where it's not ideal," Sia says. His lab is working on other ways to wirelessly drive the mechanism, including an ultrasound technique.

The hardest part of the design process was getting the material right, Sia says. Very flexible and soft materials are compatible with the body's soft innards, unlike rigid silicon or metal devices. "But if your material is collapsing like jello, it's hard to make robots out of it," he says. "It has to be stiff enough to work like a tiny implantable machine."

The next step was in vivo. Sia's team wanted to see if their devices would work inside the body, with all the complications of chemistry and anatomy. Some mice with bone cancer received implanted devices loaded with a chemo drug, while other mice received typical chemotherapy, which floods the whole body with a toxic drug. When the researchers compared the device's localized, periodic delivery of the drug to the typical treatment, the results were impressive: the bionic mice's tumors grew more slowly, more tumor cells died off, and fewer cells elsewhere in the body suffered peripheral damage.

The clinical possibilities seem obvious. Oncologists could deliver more targeted and concentrated doses of powerful chemo drugs, and Sia imagines other uses, like regulating the release of hormones. But the drug delivery device is really just a proof of concept, he says. He's not rushing out to form a startup: "We have to do the cost-benefit analysis to see if this is really a commercializable device," he says. He is bullish, however, on the medical potential of tiny squishy robots in general. Soft and mobile little bots could one day act as internal repair crews, doing a doctor's work from the inside. (For more on this, check out IEEE Spectrum's article on medical microbots.) Sia says his fabrication platform is capable of turning out a wide variety of devices. "I'm confident that we'll find something useful," he says. Sia won't say exactly what types of devices his lab is now experimenting with, except that they're looking at implanted devices that move. Here's my guess: it's a tiny squishy micromachine that resembles a cuckoo clock.
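The dosing logic is easy to picture as code. Here is a toy Python model of the six-chamber mechanism (the class and the numbers are invented for illustration, not taken from the paper): each magnet-driven click indexes the drive one position and empties whichever chamber rotates into line with the outlet.

```python
class GenevaDispenser:
    """Toy model: six chambers, one emptied per external magnet pass."""

    def __init__(self, chambers=6, doses_per_chamber=1):
        self.chambers = [doses_per_chamber] * chambers
        self.position = 0

    def magnet_pass(self):
        """One actuation: index forward 1/6 turn, vent the aligned chamber."""
        self.position = (self.position + 1) % len(self.chambers)
        released = self.chambers[self.position]
        self.chambers[self.position] = 0
        return released

device = GenevaDispenser()
print([device.magnet_pass() for _ in range(8)])  # six doses, then empty clicks
```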

Play digital music on this analog interface

"I'm a big fan of digital music, especially Spotify. The ability to dial up a much-loved song I've not heard for ages, or discover new music, are just some of the benefits I never tire of," writes UK-based designer Brendan Dawes. "Yet the lack of physicality to this digital medium has always left me wanting. I still own vinyl and a turntable and I love the ritual of physically flicking through what to place on the platter and then waiting for the needle to drop on the spinning vinyl."

To bridge the gap between the digital and analog worlds, Dawes decided to create what he calls the "Plastic Player." The playful interface features a Raspberry Pi running Pi MusicBox connected to his 50-year-old B&O stereo, and an Arduino Yún with an NFC shield. The "albums" themselves are made from a box of slide mounts with tiny NFC stickers on the back. When Dawes drops one in place, the Arduino identifies the tag, matches it to a specific record, turns on a backlight, and then communicates over WiFi with the Pi MusicBox API to play the tunes. Removing the slide from the device pauses the track. But that's not all: there are also three buttons on top, which can be used to skip, go back, or stop a song.

"It's often easy to romanticise the past, convincing ourselves that things were better back then, when really I think that's just not the case. I've discovered way more music since moving to Spotify than I ever did in record shops. What I do like though is the physicality of choosing an album to play, and this system is an attempt to blend the good parts of both worlds. The future will continue to be digitised and I embrace that, but I think there's a space in between the digital and the analog to create interactions that are filled with the inconvenience of what it is to be human."

You can read more about the Plastic Player, and see it in action, on Dawes' website. (Photos: Brendan Dawes)
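The control flow is simple enough to sketch. The Python below is a hypothetical reconstruction rather than Dawes' code: the tag UIDs, album URIs, MusicBox address and HTTP endpoints are all invented, and in the real build the tag reading happens on the Arduino Yún rather than in a script.

```python
import time
import requests  # talks to the Pi MusicBox web API (endpoints invented here)

MUSICBOX = "http://192.168.1.50"       # hypothetical address of the Pi
ALBUMS = {                             # NFC sticker UID -> album URI (made up)
    "04a219b1": "spotify:album:xyz",
    "04a219b2": "spotify:album:abc",
}

def read_nfc_uid():
    """Stand-in for the NFC shield: return the UID under the reader, or None."""
    return None  # replace with a real reader call

last_uid = None
while True:
    uid = read_nfc_uid()
    if uid != last_uid:
        if uid in ALBUMS:              # slide dropped in: play that album
            requests.post(f"{MUSICBOX}/play", json={"uri": ALBUMS[uid]})
        elif uid is None:              # slide removed: pause the track
            requests.post(f"{MUSICBOX}/pause")
        last_uid = uid
    time.sleep(0.5)
```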

Nvidia And Audi Say They'll Field a Level 4 Autonomous Car in Three Years

Jen-Hsun Huang, the CEO of Nvidia, said last night in Las Vegas that his company and Audi are developing a self-driving car that will finally be worthy of the name. That autonomous vehicle, he said, will be on the roads by 2020.

Huang made his remarks in a keynote address at CES. He was then joined by Scott Keogh, the head of Audi of America, who emphasized that the car really would drive itself. "We're talking highly automated cars, operating in numerous conditions, in 2020," Keogh said. A prototype based on Audi's Q7 was, as he spoke, driving itself around the lot beside the convention center, he added.

This implies the Audi-Nvidia car will have "Level 4" capability, needing no human being to supervise it or take the wheel on short notice, at least not under "numerous" road conditions. So maybe it won't do cross-country moose chases in snowy climes. These claims are pretty much in line with what other companies, notably Tesla, have been saying lately. The difference is in the timing: Nvidia and Audi have drawn a hard deadline three years from now. In a statement, Audi said that it would introduce what it called the world's first Level 3 car this year, based on Nvidia computing hardware and software. Level 3 cars can do all the driving most of the time, but require that a human be ready to take over.

At the heart of Nvidia's strategy is the computational muscle of its graphics processing chips, or GPUs, which the company has honed over decades of work in the gaming industry. Some 18 months ago, it released its first automotive package, called Drive PX, and today it announced the successor, called Xavier. (The Audi in the parking lot uses the older Drive PX version.) "[Xavier] has 8 high-end CPU cores, 512 of our next-gen GPUs," Huang said. "It has the performance of a high-end PC shrunk onto a tiny chip, [with] teraflop operation, at just 30 watts." By teraflop he meant 30 of them: 30 trillion operations per second, 15 times as much as the 2015 machine could handle.

That power is used in deep learning, the software technique that has transformed pattern recognition and other applications in the past three years. Deep learning uses a hierarchy of processing layers that make sense of a mass of data by organizing it into progressively more meaningful chunks. For instance, it might begin in the lowest layer of processing by tracing a line of pixels to infer an edge. It might proceed up to the next layer by combining edges to construct features, like a nose or an eyebrow. In the next higher layer it might notice a face, and in a still higher one, it might compare that face to a database of faces to identify a person. Presto, you have facial recognition, a longstanding bugbear of AI. And if you can recognize faces, why not do the same for cars, signposts, roadsides and pedestrians? Google's DeepMind, a pioneer in deep learning, did it for the infamously difficult Asian game of Go last year, when its AlphaGo program beat one of the best Go players in the world.

In Nvidia's experimental self-driving car, dozens of cameras, microphones, speakers and other sensors are strewn around the outside and also the inside. The reason: until full autonomy is achieved, the person behind the wheel will still have to stay focused on the road, and the car will see to it that he is. "The car itself will be an AI for driving, but it will also be an AI for co-driving—the AI co-pilot," Huang said. "We believe the AI is either driving you or looking out for you. When it is not driving you it is still completely engaged."

In a video clip, the car warns the driver with a natural-language alert: "Careful, there is a motorcycle approaching the center lane," it intones. And when the driver, an Nvidia employee named Janine, asks the car to take her home, it obeys her even when street noise interferes. That's because it actually reads her lips, too (at least for a list of common phrases and sentences). Huang cited work at Oxford and at Google's DeepMind showing that deep learning can read lips with 95 percent accuracy, much better than most human lip-readers. In November, Nvidia announced that it was working on a similar system. It would seem that the Nvidia test car is the first machine to emulate the ploy portrayed in "2001: A Space Odyssey," in which the HAL 9000 AI read the lips of astronauts plotting to shut the machine down.

These efforts to supervise the driver, so the driver can better supervise the car, are directed against Level 3's main problem: driver complacency. Many experts believe that this is what occurred with the driver of the Tesla Model S that crashed into a truck; some reports say he failed to override the vehicle's decision making because he was watching a video.

Last night, Huang also announced deals with other auto industry players. Nvidia is partnering with Japan's Zenrin mapping company, as it has done with Europe's TomTom and China's Baidu. Its robocar computer will be manufactured by ZF, an auto supplier in Europe; commercial samples are already available. And it is also partnering with Bosch, the world's largest auto supplier.

Besides these automotive initiatives, Nvidia also announced new directions in gaming and consumer electronics. In March, it will release a cloud-based version of its GeForce gaming platform on Facebook that will provide a for-fee service through the cloud to any PC loaded with the right client software. This required that latency, the delay in response from the cloud, be reduced to manageable proportions. Nvidia also announced a voice-controlled television system based on Google's Android system.

The common link among these businesses is Nvidia's prowess in graphics processing, which provides the computational muscle needed for deep learning. In fact, you might say that deep learning, and robocars, came along at just the right time for the company: it had built up stupendous processing power in the isolated hothouse of gaming and needed a new outlet for it. Artificial intelligence is that outlet.
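The "lowest layer finds edges" step can be made concrete in a few lines of NumPy. The sketch below hand-sets a classic vertical-edge filter (a Sobel kernel) instead of learning one, and runs it over a made-up 5x6 image whose left half is dark and right half is bright; a trained first convolutional layer would respond in much the same places.

```python
import numpy as np

image = np.zeros((5, 6))
image[:, 3:] = 1.0                    # dark left half, bright right half

sobel_x = np.array([[-1, 0, 1],       # a hand-set vertical-edge detector
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

h, w = image.shape
edges = np.zeros((h - 2, w - 2))
for i in range(h - 2):                # "valid" convolution, no padding
    for j in range(w - 2):
        edges[i, j] = np.sum(image[i:i + 3, j:j + 3] * sobel_x)

print(edges)                          # strong responses where brightness jumps
```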



This adorable new home robot is wise, watchful and doesn't judge

LAS VEGAS — The history of home robotics is littered with the carcasses of unmet promises and potential. It's 2017 and we're still not close to having a Rosie the Robot or C-3PO in our homes. Kuri, an autonomous home robot, might change that.

The 14-pound, 20-inch robot from Mayfield Robotics makes its debut this week at CES in Las Vegas. There's no complicated touch screen or even an animated face. Instead, the rolling bot has a round head that can look up at you with two simple eyes (it even has plastic eyelids) and a cone-like body with a pair of what appear to be fixed, gray plastic arms. The appearance is both cute and disarming; no uncanny-valley problems here. Kuri may also succeed because of the surprising level of intelligence packed inside at what is, for a robot of its kind, a relatively affordable price: $699.

"For generations, people have dreamed of having their own personal robot in the home, and we've been focused on making that dream more of a reality," said Sarah Osentoski, COO and co-founder of Mayfield Robotics, in a release announcing the new home bot.

Kuri can hear, speak, navigate its own environment and avoid obstacles. It also allows for simple programming through third-party services like IFTTT, and it will ship with free iOS and Android apps. The robot's hearing is powered by a four-microphone array, which should help it pick up commands spoken in a normal tone of voice. There's also a 1080p camera that you can use to see what Kuri is seeing when you're not home. As a result, Kuri could not only watch your house but also help you keep tabs on stay-at-home pets.

Kuri's internal mapping and sensor system helps it navigate your home while avoiding objects left on the floor and staircases it would otherwise fall down. Kuri moves about on a set of wheels that Mayfield claims can handle a wide variety of floor surfaces. It's also designed to be self-sufficient, finding its charger and docking whenever it runs low on power. Mayfield promises "hours of battery life," but does not specify how long Kuri will operate on a charge.

Mayfield designed Kuri to be both an entertainment and utility robot. As a result, it can connect to smart homes and be programmed to perform tasks. The robot can follow you around your home and use its built-in Bluetooth connectivity and dual speakers to play music or podcasts. It can also read stories to your children. Kuri sounds like a useful home companion, and Mayfield did an excellent job of highlighting its strengths in a series of promotional videos (especially one in which Kuri watches as its owner comes home after a big night out on the town).

Unfortunately, the robot only goes on pre-order on Tuesday and will not ship until late this year. That's a story we've heard from home robot companion makers before (looking at you, Jibo). Then they get delayed and delayed again, usually getting more expensive in the process. Ultimately, they never ship, or ship so long after the initial blush of interest that no one cares. Let's hope Kuri avoids that fate.

Lego Boost teaches kids to bring blocks to life with code

If you've ever wished your childhood Lego creations could come to life, your dreams are now closer to reality. Lego has just unveiled a new subbrand called Boost which promises to do just that. The base set contains a combination of sensors, motors and a unique companion app that teaches kids how to code so that they can program their new robot friends. Lego's Mindstorms could let you do this too, but that's a decidedly more advanced system aimed at young adults. Boost, on the other hand, is designed for kids ages seven and up. The Lego Boost base starter set is priced at $160 and will be available later this year.

The first Lego Boost product is what the company is calling a "creative toolbox," which contains three Boost bricks plus 840 other Lego blocks. The core unit is the Move Hub, which contains a six-axis tilt sensor, two input/output ports, a power button and a light that changes color. It's powered by six AAA batteries and is covered in the usual Lego studs so that kids can build on top of it. Other Boost bricks include a combination color and distance sensor and an interactive motor. The motor has a tachometer in it, which tells the software how much it's turned and at what speed. This, Lego says, allows for fine control and more minute movements.

The set also comes with building instructions for five models: Vernie the Robot, Frankie the Cat, the Guitar 4000, the Multi-Tool Rover 4 (essentially a construction-type vehicle) and the Autobuilder, a machine that builds tiny Lego creations for you. But before you can build any of those, you have to download the companion Boost app. The app is essential to the process; it holds all of the instructions, and it's the key method of programming and interacting with the Boost creations.

Once you get the app, it will ask you to create a "Getting Started" vehicle, which is really just the three Boost bricks put together. This is basically a tutorial mode that walks kids through the Bluetooth pairing process and familiarizes them with the app and the hardware. They'll immediately get into the coding interface, which consists of drag-and-drop modules, and learn how to make their little vehicle move around. Then the kids can choose whichever of the five models they want to build. When they do, the app walks them through their creation step by step. With Vernie the Robot, for example, you'll first create the head, then the upper torso and shoulders, and then you'll be instructed to plug in the Move Hub. Press the green button and the robot comes to life.

Because the app knows you're building the robot instead of the other creations, it immediately assumes the character of Vernie and starts talking, asking for your name and introducing itself. It will then suggest going for a drive, but because you haven't built his tracks yet, it will just vibrate. Vernie will then prompt you to complete his build. Simon Kent, Lego Boost's lead designer, joked that this is probably the first time a Lego creation has told you to continue building it. Indeed, Vernie has a lot of built-in charm. Its head moves when it talks to you, and when you shake its hand (thus triggering the tilt sensor), it greets you like a friend. Kent says this is part of what makes the Boost toys feel so personal and alive. "You don't need to program those aspects in or code from scratch," he says. "It's much easier than Mindstorms."

From there, it's a matter of coding the robot to do what you want. The app has a freeplay area that lets you code your creation with all kinds of different modules: the green ones indicate movement, the purple ones are for speech (it uses the tablet's microphone and speakers for audio) and the blue ones are for action. The code is horizontal and runs from left to right, so it's easy for kids to grasp. Plus, the modules are icon-based, not text-based, so you don't even have to know how to read.

What I really like about the Boost app are the activities. There's a Western-style one, for example, where you can outfit Vernie with a handlebar mustache and a little shooter gun. You can also build a target for Vernie to shoot at. The app will then prompt you to compile a code string where Vernie shoots whenever it hears a clap (sketched in code below). Start the activity, and Vernie will pivot around emitting a radar-like sound. Clap, and Vernie will stop and shoot its tiny Lego bullet. I tried this out in a demo, and it worked quite well, though sometimes it would trigger even at the slightest sound. "That could just be because of the app," said Kent, adding that it was still in beta.

As children play through these activities, they'll learn about new coding functions. So the more activities they do, the more coding modules they'll accumulate. One particularly funny Vernie activity is to, well, pull its finger. When you do, it'll emit a farting sound. "It's immature, but kids love it," said Kent. There's another one where Vernie dances to music, and whenever you clap, it'll spin. The clap will even make the light on its chest change color like a disco ball.

The other Boost creations are pretty great, too. The Guitar 4000 lets you play your own music, the Multi-Tool Rover is a vehicle that can be any tool you wish, and the Autobuilder uses a grid-reference palette to put together Lego bricks for you. I was particularly taken with Frankie the Cat. It starts off as a kitten, which meows and purrs as you cradle it. From there you can program in all the various characteristics of a cat. Feed it from a "milk bottle" and it will purr even more (the bottle has an orange tip, which triggers the color sensor in the cat's "mouth"). Give it too much milk, and it will fart. Lift it by its tail and it will meow angrily. Its eyebrows twitch and its tail wags. You can even leave the program running in the background, so you have to figure out what the cat is meowing for: does it want milk, or a rub? "It's a nurturing, Tamagotchi style of play," said Kent.

But what makes Lego Boost especially amazing is that you can use it with your existing Lego bricks. That means that if you have a Lego Ninjago set lying around, you can totally use it with Boost, too. Boost lets you build three different bases: a walking base for creating your own robot animal, a driving base for a vehicle, and an entrance base for a castle or a fort. Lego calls this a "creative canvas" that encourages kids to think creatively and use their Boost bricks and coding skills with all manner of different creations. "We know that children dream of bringing their Lego creations to life," said Kent in a statement. "Our chief ambition for Lego Boost is to fulfill that wish."
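As a rough illustration of what that clap-triggered code string amounts to, here is the same logic in Python. The app itself uses graphical blocks, so the function names and threshold below are invented; the too-sensitive threshold even mimics the demo's hair-trigger behavior.

```python
import random

def mic_level():
    """Stand-in for the tablet's microphone; returns loudness from 0 to 1."""
    return random.random()

def shoot():
    print("Vernie stops scanning and fires a tiny Lego bullet!")

def on_loud_sound(action, threshold=0.95, samples=200):
    """Poll while the activity runs; fire the action once a sound crosses
    the threshold. Set the threshold too low and, as in the demo, the
    slightest noise will trigger it."""
    for _ in range(samples):
        if mic_level() > threshold:
            action()
            break

on_loud_sound(shoot)
```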

Amazon’s robot workforce grows by 50pc in just one year

E-commerce and cloud giant Amazon has revealed that it now has 45,000 robots across 20 fulfilment centres around the world. This is a 50pc increase on the same time last year, when the company said that it employed 30,000 robots alongside its 306,000 people.

Amazon uses the robots to automate the picking and packing process at large warehouses. The robots are 16in tall, weigh 145kg, can travel at 5mph and can carry loads that weigh up to 317kg. The machines became part of the company's workforce when Amazon acquired Kiva Systems in 2012 for $775m. According to The Seattle Times, the robots' growth is keeping pace with that of Amazon's human workforce, which grew by around 46pc over the 2016 calendar year.

Although the rise of Amazon's robot army might be alarming in terms of human job loss, the reality is that while robots are good for moving objects from A to B, human dexterity is still essential. But that may change. It emerged yesterday (3 January 2017) that in Japan, insurance company Fukoku Mutual is to cut the workforce of its claims department by 30pc, using AI to validate medical claims.

This could be just the beginning for Amazon. In recent weeks, Amazon made its first automated drone delivery in the UK, and the company has also obtained a patent for an airborne fulfilment centre. This will, in effect, be a floating zeppelin airship acting as a kind of mothership for the worker drones. You've been warned.

Electric Fields Fight Deadly Brain Tumors

Jessica Morris was on a hiking trail in upstate New York last January when she suddenly uttered a line of gibberish and fell to the ground, her body shaking in a full seizure. A few hours later in a hospital she learned that she had glioblastoma, an aggressive brain tumor, and several days after that she was on the operating table having brain surgery. Since then, she's been fighting for her life.

She's grateful to have a radical new weapon in her arsenal, one that only became available to patients like her in 2015. She wears electrodes on her head all day and night to send an AC electric field through her brain, trying to prevent any leftover tumor cells from multiplying. She's been wearing this gear for about six months so far. "I think it's brilliant," Morris says. "I'm proud to wear it."

The Optune device from Novocure, an international company with R&D operations in Haifa, Israel, can't exactly be called convenient or unobtrusive. Morris goes about her business with a shaved head plastered with electrodes, which are connected by wires to a bulky generator she carries in a shoulder bag. Every few days her husband helps her switch out the adhesive electrode patches, which requires reshaving her head, making sure the skin of her scalp is healthy, and applying the new patches. Morris isn't complaining about the effort. "If you have a condition which has no cure, it's a great motivator," she says dryly.

Doctors typically combat glioblastoma with the triad of surgery, radiation, and chemotherapy. Optune's tumor-treating fields (TTFs) offer an entirely new type of treatment. Unlike chemo, this electrical treatment doesn't cause collateral damage in other parts of the body. Yet the technology has been slow to catch on. "The adoption rate has not been stellar to date," admits Eilon Kirson, Novocure's chief science officer. He's hoping the most recent results of Optune's biggest clinical trial yet will make the difference: two years after beginning treatment, 43 percent of 695 patients with glioblastoma who used Optune were still alive, compared to 30 percent of patients on the standard treatment regimen. Four years out, the survival rates are 17 percent for Optune patients and 10 percent for the others. "To patients, that's a big difference," Kirson says. "That's worth fighting for."

Many oncologists, however, still hesitate to prescribe Optune. Wolfgang Wick, a professor of neuro-oncology at the University of Heidelberg, in Germany, has written skeptically about TTFs and says the long-term results don't change his outlook. He draws a contrast with the chemotherapy drug temozolomide, which provides a clear benefit to a subset of patients who have a particular biomarker. Doctors don't know which patients will respond best to the electric fields, he says, and that makes Optune a less appealing treatment option. "If I listen to my patients, this is one thing missing with the TTF, and this has not changed," Wick says.

Novocure executive chairman William Doyle argues that every weapon should be deployed against brain tumors, which are notoriously tough to fight. The last advance in treatment came about 15 years ago when doctors introduced temozolomide. "Since then, every attempt to make an improvement in these patients' survival rates has failed," Doyle says.

Novocure's system uses electrodes stuck to the scalp to create a low-intensity oscillating electric field in the brain, which interferes with cancer cells as they try to divide and multiply. At one moment in the cell division process, the cell distorts into an hourglass shape. That's when the tumor-treating field has its impact, because the cell's geometry concentrates the field at the center of the hourglass. The TTF works on molecules inside the cell that are polarized and respond to electric fields. By pulling those polarized molecules out of their proper positions, the field interferes with the precise procedures of cell division. Or, as Doyle puts it, "all hell breaks loose inside the cell." The cells don't divide and may even go into a state of programmed cell death.

So why do the TTFs damage tumor cells while leaving normal cells unharmed? Doyle says the secret lies in the frequency of the electric field. Each cell type has a membrane with specific filtering properties, allowing only certain frequencies to penetrate it. (You can think of the cellular membrane as a capacitor, Doyle says; at the right frequency, the field can go through it with very little impedance.) Optune uses a frequency of 200 kilohertz to get inside glioblastoma cells, but that frequency doesn't penetrate neurons and other normal brain cells. It doesn't hurt that there's very little cell division going on in the brain. That's not an advantage Novocure will have as it looks to push Optune into treating cancer in other parts of the body, such as the pancreas, ovaries, and lungs, where normal healthy cells also divide frequently.

Jessica Morris hopes many other brain cancer patients will try Optune, despite the uncertainties that come with a new medical technology. Her doctor originally recommended a nine-month treatment plan but recently revised that estimate, saying she might want to keep the gear on for two years. That change reflects her promising medical outlook: her MRI scans haven't shown any new brain tumor growth, and Morris wants to keep it that way. "If I'm still doing well, why would I take it off?" she says.
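Doyle's capacitor analogy can be made quantitative in a toy way. The sketch below simply evaluates the textbook impedance formula Z = 1/(2*pi*f*C) at a few frequencies; the capacitance value is purely illustrative, not a measured membrane property, but it shows the qualitative point that a higher-frequency field "sees" a far smaller barrier.

```python
import math

def capacitor_impedance_ohms(freq_hz, capacitance_f):
    """Impedance magnitude of an ideal capacitor: Z = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

C = 1e-9  # hypothetical effective capacitance, farads (illustrative only)
for f in (50, 1_000, 200_000):        # mains, low audio, Optune's 200 kHz
    print(f"{f:>7} Hz -> {capacitor_impedance_ohms(f, C):>12,.0f} ohms")
```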

10 Tech Trends That Made the World Better in 2016

2016 was an incredible year for technology, and for humanity. Despite all the negative political news, there were 10 tech trends this year that positively transformed humanity. For this "2017 Kick-Off" post, I reviewed 52 weeks of science and technology breakthroughs and categorized them into the top 10 tech trends changing our world. I'm blown away by how palpable the feeling of exponential change has become. I'm also certain that 99.99% of humanity doesn't understand or appreciate the ramifications of what is coming. In this post, enjoy the top 10 tech trends of the past 12 months and why they are important to you. Let's dive in...

In 2010, 1.8 billion people were connected. Today, that number is about 3 billion, and by 2022-2025 it will expand to include every human on the planet, approaching 8 billion humans. Unlike when I was connected 20 years ago at 9,600 baud via AOL, the world today is coming online at one megabit per second or greater, with access to the world's information on Google, access to the world's products on Amazon, access to massive computing power on AWS and artificial intelligence with Watson... not to mention crowdfunding for capital and crowdsourcing for expertise. Looking back at 2016, you can feel the acceleration. Here are five stories that highlight the major advances in our race for global connectivity:

a) Google's 5G Solar Drones Internet Service: Project Skybender is Google's secretive 5G Internet drone initiative. News broke this year that Google has been testing these solar-powered drones at Spaceport America in New Mexico to explore ways to deliver high-speed Internet from the air. Their purported millimeter-wave technology could deliver data from drones up to 40 times faster than 4G.

b) Facebook's Solar Drone Internet Service: Even before Google, Facebook was experimenting with a solar-powered drone, also for the express purpose of providing Internet to billions. The drone has the wingspan of an airliner and flies with roughly the power of three blowdryers.

c) ViaSat Plans 1 Terabit Internet Service: ViaSat, a U.S.-based satellite company, has teamed up with Boeing to launch three satellites to provide 1 terabit-per-second Internet connections to remote areas, aircraft and maritime vehicles. ViaSat is scheduled to launch its ViaSat-2 satellite in early 2017.

d) OneWeb Raises $1.2B for 900-Satellite Constellation: An ambitious low-Earth orbit satellite system proposed by my friends Greg Wyler, Paul Jacobs and Richard Branson just closed $1.2 billion in financing. This 900-satellite system will offer global Internet services as soon as 2019.

e) Musk Announces 4,425-Satellite Internet System: Perhaps the most ambitious plan for global Internet domination was proposed this year by SpaceX founder Elon Musk, with plans for SpaceX to deploy a 4,425-satellite low-Earth orbit system to blanket the entire planet in broadband.

We've just passed a historic inflection point: 2016 was the year solar and renewable energy became cheaper than coal. In December, the World Economic Forum reported that solar and wind energy is now the same price as or cheaper than new fossil fuel capacity in more than 30 countries. "As prices for solar and wind power continue their precipitous fall, two-thirds of all nations will reach the point known as 'grid parity' within a few years, even without subsidies," they added. This is one of the most important developments in the history of humanity, and this year marked a number of major milestones for renewable energy.
Here are 10 data points (stories) I've hand-picked to hammer home the historic nature of this 2016 achievement.

a) 25 Percent of the World's Power Comes From Renewables: REN21, a global renewable energy policy network, published a report showing that a quarter of the world's power now comes from renewable energy. International investment in renewable energy reached $286 billion last year (with solar accounting for over $160 billion of this), and it's accelerating.

b) In India, Solar Is Now Cheaper Than Coal: An amazing milestone indeed, and India is now on track to deploy more than 100 gigawatts of solar power by 2022.

c) The UK Is Generating More Energy From Solar Than Coal: For the first time in history, the U.K. this year produced an estimated 6,964 GWh of electricity from solar cells, 10% more than the 6,342 GWh generated by coal.

d) Coal Plants Being Replaced by Solar Farms: The Nanticoke Generating Station in Ontario, once North America's largest coal plant, will be turned into a solar farm.

e) Coal Will Never Recover: The coal industry, once the backbone of U.S. energy, is fading fast on account of renewables like solar and wind. Official and expert reports now state that it will never recover (coal power generation in Texas, for example, is down from 39% in early 2015 to 24.8% in May 2016).

f) Scotland Generated 106% of Its Energy From Wind: This year, high winds boosted renewable energy output to provide 106% of Scotland's electricity needs for a day.

g) Costa Rica Ran on Renewables for 2+ Months: The country ran on 100% renewable energy for 76 days.

h) Google to Run 100% on Renewable Energy: Google has announced that its entire global business will be powered by renewable energy in 2017.

i) Las Vegas' City Government Meets Goal of 100% Renewable Power: Las Vegas is now the largest city government in the country to run entirely on renewable energy.

j) Tesla's Gigafactory: Tesla's $5 billion structure in Nevada will produce 500,000 lithium-ion batteries annually as well as Tesla's Model 3 vehicle. It is now over 30 percent complete; the 10-million-square-foot structure is set to be done by 2020. Musk has projected that a total of 100 Gigafactories could provide enough storage capacity to run the entire planet on renewables.

Though it may seem hard to believe, the end of cancer and disease is near. Scientists and researchers have been working diligently to find novel approaches to combating these diseases, and 2016 saw some extraordinary progress. Here are my top 10 picks that give me great faith in our ability to cure cancer and most diseases:

a) Cancer Immunotherapy Makes Strides (Extraordinary Results): Immunotherapy involves using a patient's own immune system (in this case, T cells) to fight cancer. Doctors remove immune cells from patients, tag them with "receptor" molecules that target the specific cancer, and then infuse the cells back into the body. During the study, 94% of patients with acute lymphoblastic leukemia (ALL) saw symptoms vanish completely. Patients with other blood cancers had response rates greater than 80%, and more than half experienced complete remission.

b) In China, CRISPR-Cas9 Used in First Human Trial: A team of scientists at Sichuan University in China became the first to treat a human patient, one with an aggressive form of lung cancer, using the groundbreaking CRISPR-Cas9 gene-editing technique.
c) NIH Approves Human Trials Using CRISPR: A team of physicians at the University of Pennsylvania's School of Medicine had their project, modifying the immune cells of 18 different cancer patients with the CRISPR-Cas9 system, approved by the National Institutes of Health. Results are TBD.

d) Giant Leap in Treatment of Diabetes From Harvard: For the first time, Harvard stem cell researchers created insulin-producing islet cells that cured diabetes in mice. This offers a promising path to a cure in humans as well.

e) HIV Genes Cut Out of Live Animals Using CRISPR: Scientists at the Comprehensive NeuroAIDS Center at Temple University were able to successfully cut HIV genes out of live animals, with over a 50% success rate.

f) New Treatment Causes HIV-Infected Cells to Vanish: A team of scientists in the U.K. discovered a new treatment for HIV. The patient was treated with vaccines that helped the body recognize the HIV-infected cells. Then the drug Vorinostat was administered to activate the dormant cells so they could be spotted by the immune system.

g) CRISPR Cures Mice of Sickle Cell Disease: CRISPR was used to completely cure sickle cell disease in mice by editing the errant DNA sequence. The treatment may soon be used to cure this disease, which affects about 100,000 Americans.

h) Eradicating Measles (in the U.S.): The World Health Organization (WHO) announced that after 50 years, measles has been successfully eradicated in the U.S. Measles is one of the most contagious diseases around the world.

i) New Ebola Vaccine Proved to Be 100% Effective: None of the nearly 6,000 individuals vaccinated with rVSV-ZEBOV in Guinea, a country with more than 3,000 confirmed cases of Ebola, showed any signs of contracting the disease.

j) Eradicating Polio: The World Health Organization has announced that it expects to fully eradicate polio worldwide by early 2017.

I am personally convinced that we are on the verge of significantly impacting human longevity; at a minimum, making "100 years old the new 60," as we say at Human Longevity Inc. This year, hundreds of millions of dollars were poured into research initiatives and companies focused on extending life. Here are five of the top stories from 2016 in longevity research:

a) 500-Year-Old Shark Discovered: A Greenland shark that could be over 500 years old was discovered this year, making the species the longest-lived vertebrate in the world.

b) Genetically Reversing Aging: With an experiment that replicated stem cell-like conditions, Salk Institute researchers made human skin cells in a dish look and behave young again, and mice with a premature aging disease were rejuvenated, with a 30% increase in lifespan. The Salk Institute expects to see this work in human trials in less than 10 years.

c) 25% Life Extension Based on Removal of Senescent Cells: In a study published in the journal Nature, cell biologists Darren Baker and Jan van Deursen found that systematically removing a category of living, stagnant cells can extend the life of mice by 25 percent.

d) Funding for Anti-Aging Startups: Jeff Bezos and the Mayo Clinic backed the anti-aging startup Unity Biotechnology with $116 million. The company will focus on medicines to slow the effects of age-related diseases by removing senescent cells (as mentioned above).
e) Young Blood Experiments Show Promising Results for Longevity: Sakura Minami and her colleagues at Alkahest, a company specializing in blood-derived therapies for neurodegenerative diseases, have found that simply injecting older mice with the plasma of young humans twice a week improved the mice's cognitive function as well as their physical performance. In mice, this practice has produced a 30% increase in lifespan, along with increases in muscle tissue and cognitive function.

I've become increasingly confident and passionate about the ability of stem cells, the regenerative engine of the body, to help cure disease and extend the healthy human lifespan. I previously wrote about stem cells and the incredible work of Dr. Robert (Bob) Hariri here. Below are my top three stories demonstrating the incredible research and implications for stem cells in 2016:

a) Stem Cells Able to Grow New Human Eyes: Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye's lens, and more using only a small sample of adult skin.

b) Stem Cell Injections Help Stroke Victims Walk Again: In a study out of Stanford, seven of the 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer's disease, Parkinson's, and Lou Gehrig's disease.

c) Stem Cells Help Paralyzed Victim Regain Use of Arms: Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.

2016 was definitely "the year of the autonomous vehicle." As Google, Tesla, and Uber lead the charge, almost every major car company is investing heavily in autonomy. This will be one of the defining technology developments of the decade — soon we may well look back in shock that we ever let humans drive cars on their own. Here are the top nine developments in self-driving cars from the last 12 months:

a) Autonomous Uber Operational in Pittsburgh: Uber's self-driving cars began picking up passengers in Pittsburgh this year. The company also attempted a rollout in San Francisco.

b) Uber's Self-Driving Truck Made a Delivery of 50,000 Beers: This year, Uber acquired autonomous truck company Otto, and the retrofitted 18-wheeler made its first delivery: 50,000 cans of Budweiser.

c) Every Tesla Will Be Fully Autonomous in 2017: Elon Musk announced that all new Tesla cars will have the hardware for Level 5 autonomy. Next steps toward fully autonomous driving on public roads include refining the software and gaining regulatory approval.

d) Ford Targets 2021 for Autonomous Vehicle Release: Ford announced its intention to deliver a high-volume, fully autonomous vehicle for ridesharing in 2021.

e) GM's First Fully Autonomous Car: The company plans to bring its fully electric self-driving cars to the masses by launching its first driverless cars on Lyft.

f) Google Creates Waymo to Support Self-Driving Car Technology: Google spun out its self-driving car unit as a separate entity called Waymo.

g) Google Plans Ride-Sharing Service With Chrysler: Google will deploy a semi-autonomous version of the Chrysler Pacifica minivan as soon as the end of 2017.
h) Autonomy Will Kill Car Ownership: A former Tesla and BMW exec said that self-driving cars would start to kill car ownership in just five years. John Zimmer, the cofounder and president of Lyft, said in September that car ownership would "all but end" in cities by 2025.

i) Self-Driving Tractors Hit Farms: Self-driving tractors can deliver faster, more precise results than their human-controlled counterparts.

Quadcopters and multicopters big and small made huge strides in 2016. We are headed toward a world where autonomous drones will image the world at millimeter resolution, deliver products and packages, and transport humans to remote areas that were previously inaccessible by road. Here are the top six drone and "flying car" developments this year:

a) Amazon Prime Air Made Its First Delivery: Amazon's drone delivery program, Prime Air, made its first delivery in the U.K. this year. Expect a much bigger rollout in 2017.

b) The 7-Eleven Convenience Store Leads: Convenience store chain 7-Eleven made 77 drone deliveries this year, beating Amazon by a long shot.

c) Mercedes Commits $500M to Drone Delivery: Mercedes-Benz Vans and drone tech startup Matternet have created a concept vehicle called the Vision Van. The van's rooftop serves as a launch and landing pad for Matternet's new M2 drones.

d) Larry Page Funding Flying Cars: Reports this year suggest Google cofounder Larry Page has been personally funding a pair of startups devoted to creating flying cars. He has purportedly put over $100 million into the ventures.

e) 1,000 Organ-Transport Drones Ordered: Last year we saw Chinese company eHang announce the first human-carrying drone. Recently, United Therapeutics CEO Martine Rothblatt announced a deal to fund 1,000 retrofitted eHang drones to deliver organs to transplant patients, as part of Rothblatt's Manufactured Organ Transport Helicopter (MOTH) system.

f) Uber Launched Its Elevate Program: Global transportation giant Uber announced its plans to enter the "flying car" service arena by publishing a massive whitepaper this year detailing its plan to launch an "on-demand aviation" service called Uber Elevate.

Artificial intelligence (AI) is the most important technology humanity will ever develop. I believe AI is a massive opportunity for humanity, not a threat. Broadly, AI is the ability of a computer to understand your question, to search its vast memory banks, and to give you the best, most accurate answer. AI will also help humanity fundamentally solve its grandest challenges. You may think of early versions of AI as Siri on your iPhone, or IBM's Watson supercomputer, but what is coming is truly awesome. Here are 10 of the most important stories from the past year:

a) NVIDIA Revealed a Deep-Learning Computer Chipset: The Tesla P100, NVIDIA's newly announced 15-billion-transistor chip, is designed specifically for deep-learning AI technology. Hardware advances like this are rapidly accelerating AI development.

b) $5M IBM Watson AI XPRIZE: The XPRIZE Foundation and IBM Watson, in partnership with TED, announced a $5M purse for the team able to develop an AI that can collaborate with humans to solve grand challenges. The top three teams will compete on the TED stage in the spring of 2020.

c) AIs Can Read Your Lips: A new AI lip reader out of Oxford called LipNet was built to process whole sentences at a time. LipNet was 1.78 times more accurate than human lip readers at translating the same sentences.
d) AIs Predict Elections Better Than Humans: MogIA, an AI system developed by an Indian startup, correctly predicted the outcome of this year's U.S. presidential election. It based its analysis on 20 million data points from platforms such as Google, Twitter, and YouTube.

e) AI System Beats 540-to-1 Odds, Predicts the Kentucky Derby Superfecta: A startup called Unanimous AI built a swarm system in which individuals within a group influence each other's decision making. The swarm correctly predicted the top four finishers—known as a superfecta—beating 540-to-1 odds.

f) Microsoft Speech Recognition Tech Scores Better Than Humans: Microsoft's new speech recognition technology is able to transcribe conversational speech as well as (or even better than) humans. The technology scored a word error rate (WER) of 5.9%.

g) AI-Written Novel Passes First Round of Literary Award: Titled "The Day a Computer Writes a Novel," the short story was a team effort between human authors, led by Hitoshi Matsubara of Future University Hakodate, and, well, a computer.

h) AI Saves Woman's Life: Reports assert that Japanese doctors have, for the first time in history, used artificial intelligence from IBM's Watson system to detect a rare type of leukemia, helping to save a patient's life.

i) AI Beats Human Pilot in Air Combat: Retired United States Air Force Colonel Gene Lee recently went up against ALPHA, an artificial intelligence developed by a University of Cincinnati doctoral graduate, in a high-fidelity air-combat simulator. The colonel lost to the AI.

j) DeepMind Beats World Go Champion: The Go-playing AI AlphaGo from Google's DeepMind beat the reigning Go world champion, winning the five-game series 4-1 overall. This is a major achievement in the fields of AI and deep learning.

This year saw a number of fundamental achievements in physics, as well as several notable discoveries in our quest to explore the cosmos. Here are the top three stories for your consideration:

a) Gravitational Waves Confirmed: After decades of searching, scientists succeeded in detecting gravitational waves from the violent merger of two massive black holes.

b) Evidence Found for Planet Nine: This year, more evidence arose suggesting there is, in fact, another giant, icy planet circling at the edges of our solar system.

c) Earth-Size Planet Around Proxima Centauri: A newly discovered planet that bears striking similarities to our own prompts remarkable inroads into the study of space. It also offers a new place to search for the possibility of extraterrestrial life.

We are living through the birth of the commercial space era, driven by passionate billionaire backers. Companies like SpaceX, Blue Origin, Planetary Resources, and the various teams competing for the Google Lunar XPRIZE are building commercial rockets and spacecraft to explore the cosmos. It is an incredibly exciting time for commercial space—here are the top four developments from the past 12 months.

a) Bezos Announced 'New Glenn': Jeff Bezos announced a massive new reusable rocket family in development at his private spaceflight company, Blue Origin. The rocket, called New Glenn, will be used to launch satellites and people into space, according to Bezos.

b) Four Companies Sign Private Contracts to Fly to the Moon in 2017: The teams are competing to win the $20 million Google Lunar XPRIZE by becoming the first private team to land a spacecraft on the moon. The companies are Moon Express, SpaceIL, Synergy Moon, and TeamIndus.
c) Musk Announces Mars Plans: SpaceX founder Elon Musk said he will put a person on Mars by 2025. There are four key things needed to get there: full reusability, refueling in orbit, propellant production on Mars, and a propellant that works.

d) Breakthrough Starshot Project Targets Interstellar Travel: Theoretical physicist Stephen Hawking and Russian billionaire Yuri Milner announced their collaborative venture Breakthrough Starshot, a $100 million attempt to develop an interstellar spacecraft.

What a past 12 months! Image Credit: NASA Earth Observatory

This guitar-playing robot performs American folk music


Inspired by a statement written on Woody Guthrie's guitar, This Machine Kills Fascists (TMKF) is an Arduino Mega-based, guitar-playing robot that performs traditional American folk music on a portable stage. Sheet music with the song lyrics is printed and left on the benches set up in front of the stage, while audience members are encouraged to sing along to the tunes. Developed by engineer Dustyn Roberts, artist Troy Richards, and designer Ashley Pigford, TMKF combines the analog tradition of folk music with the digital technology of robotics. As the team explains: "Our project is inherently positive and seeks to bring people together through music. It uses a strategy of generating empathy and goodwill with an artificial intelligence to make us ask questions about the kind of community we may or may not be making with actual humans. With TMKF we hope to create a compelling experience that starts conversations." You can read more about TMKF here, and see an early test of the strumming robot below, along with a rough sketch of how such a strummer might be driven. (Photos: Dustyn Roberts / Troy Richards) This entry was posted by Arduino Team on Thursday, January 5th, 2017 and is filed under Arduino, Featured, Mega. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
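The team hasn't published TMKF's firmware here, so the following is only a minimal sketch of the idea: a single hobby servo sweeping a pick across the strings once per beat. The pin number, servo angles, and tempo are all invented for illustration.

```
// Hypothetical example -- not the TMKF firmware.
// Drives one hobby servo that sweeps a pick across the strings,
// producing one down/up strum per beat at a fixed tempo.
#include <Servo.h>

Servo strummer;

const int SERVO_PIN = 9;            // assumed wiring on the Mega
const int PICK_UP = 60;             // degrees: pick resting above the strings
const int PICK_DOWN = 120;          // degrees: pick swept past the strings
const unsigned long BEAT_MS = 500;  // one stroke per beat at 120 BPM

void setup() {
  strummer.attach(SERVO_PIN);
  strummer.write(PICK_UP);          // start at rest
}

void loop() {
  strummer.write(PICK_DOWN);        // downstroke across the strings
  delay(BEAT_MS);
  strummer.write(PICK_UP);          // upstroke back to rest
  delay(BEAT_MS);
}
```

A real build like TMKF would presumably sequence strums and chord changes from stored song data rather than a fixed beat, but the control loop would look much the same.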

Breathalyzer Distinguishes Among 17 Diseases at Once


Imagine talking to your best friend on the phone, when suddenly a text message from your doctor pops up. A plug-in module on your smartphone has detected an unusual chemical pattern in your breath, and you need to come in to be evaluated for early signs of cancer. Physicians have been detecting disease from the smell of breath, urine, and feces for over 2,000 years. Illness can change the metabolism in our bodies, causing our cells to release volatile organic compounds (VOCs), molecules that travel through the bloodstream to the lungs and are exhaled. People with early-stage cancer, for example, breathe out different concentrations of VOCs than healthy individuals. In dozens of studies, scientists have detected VOCs in the breath of patients using laboratory techniques like gas chromatography and mass spectroscopy. Now, a team of researchers led by engineer Hossam Haick of the Technion−Israel Institute of Technology in Haifa reports its latest advances with a simpler, more sophisticated method for detecting disease in the breath. Called the "Na-Nose," this artificially intelligent nanoarray relies on gold nanoparticles and carbon nanotubes to diagnose and classify 17 different diseases based on a single human breath. In a December study published in ACS Nano, the researchers for the first time demonstrate clinical proof that diseases have unique chemical patterns, or "breathprints," that distinguish them from one another. In the study, the Na-Nose used those patterns to distinguish among diseases with 86 percent accuracy, and showed potential to diagnose more than one disease at once. Haick began developing the device 10 years ago, when he joined the faculty at Technion after completing a postdoc at the California Institute of Technology. The screening tool is made up of two parts: a white desktop box with a tube into which a person exhales, sending his or her breath into an array of sensors; and an attached computer with machine-learning software trained to recognize patterns from those sensors. The array consists of thin layers of either gold nanoparticles or carbon nanotubes, each coated with organic ligands—sticky molecules that bind compounds in our breath. When VOCs in the breath bind to the ligands, the electrical resistance between the nanoparticles or nanotubes changes, and that signal is sent to a computer. There, pattern-recognition software determines whether the signal corresponds with a known chemical signature of a particular disease (a toy sketch of this matching step follows the article). The lab has trained the device on over 23 illnesses, teaching it to discriminate between a healthy individual and an individual with one of these catalogued diseases. But "that's the easy part," says Haick. Next, his team took the device into clinics, testing it on over 8,000 patients to teach the software to discriminate between disease and confounding factors, such as contamination, age, gender, background disease (such as obesity or diabetes), and geography. And it worked: last year, for example, the team found that the tool could detect gastric cancer in a blinded test of patients with 92- to 94-percent accuracy. The current study, for the first time, used the Na-Nose to detect and discriminate among 17 different diseases in the breath of 1,404 individuals across five countries. "For every disease, there is a unique fingerprint throughout exhaled breath that is quite distinguishable from other disease types," says Haick. That's not to say physicians will soon be able to diagnose everything through breath.
It has been difficult to accurately diagnose prostate, breast, and bladder cancer based on exhalations, says Haick, so it is likely that other tests, such as urine analysis, may be better suited in those cases. He is now working to miniaturize the device in the hopes of adding a module onto smartphones. That project, called SniffPhone and funded by the European Union's Horizon 2020 Program, is currently underway, but Haick expects the desktop box will reach doctors' offices much sooner. When it does, he hopes the Na-Nose will help doctors catch signs of cancer in people long before other types of tests would be able to. In clinical studies, it was able to discriminate among stages of gastric cancer and lung cancer as early as stage one. "We aim to catch disease at an early stage, where we can increase the survival rate," says Haick.
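The real Na-Nose software is a machine-learning classifier trained on thousands of clinical samples; as a toy sketch of the nearest-pattern idea described above, the snippet below matches one exhalation, expressed as a vector of sensor resistance changes, against stored "breathprints" and reports the closest one. The sensor count, disease labels, and signature values are all invented for illustration.

```
// Toy illustration of breathprint matching: find the stored disease
// signature closest (in Euclidean distance) to a measured sample.
// All numbers and labels here are made up for the example.
#include <array>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

constexpr int kSensors = 8;  // assumed size of the nanomaterial array
using Breathprint = std::array<double, kSensors>;

struct Disease {
  std::string name;
  Breathprint signature;  // mean normalized resistance change per sensor
};

// Euclidean distance between a measured pattern and a stored signature.
double distance(const Breathprint& a, const Breathprint& b) {
  double sum = 0.0;
  for (int i = 0; i < kSensors; ++i) {
    double d = a[i] - b[i];
    sum += d * d;
  }
  return std::sqrt(sum);
}

int main() {
  std::vector<Disease> catalog = {
      {"healthy",        {0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.0, 0.1}},
      {"gastric cancer", {0.9, 0.4, 0.1, 0.7, 0.2, 0.8, 0.3, 0.5}},
      {"lung cancer",    {0.3, 0.8, 0.6, 0.2, 0.9, 0.1, 0.7, 0.4}},
  };

  // One exhalation, expressed as normalized resistance changes.
  Breathprint sample = {0.8, 0.5, 0.2, 0.6, 0.3, 0.7, 0.2, 0.4};

  const Disease* best = nullptr;
  double bestDist = 1e9;
  for (const auto& d : catalog) {
    double dist = distance(sample, d.signature);
    if (dist < bestDist) { bestDist = dist; best = &d; }
  }
  std::cout << "Closest breathprint: " << best->name << '\n';
  // Prints "gastric cancer" for the sample above.
}
```

A production system would replace this hand-built lookup with trained classifiers validated across large patient populations, which is exactly what the 8,000-patient clinical work described above was for.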

Andra Keay on robots crossing the chasm


In this week's Design Podcast, I sit down with Andra Keay, managing director of Silicon Valley Robotics. We talk about the evolution of robots, applications that solve real problems, and what constitutes a good robot. Here are some highlights from our conversation:

Silicon Valley is becoming the epicenter of robotics. I've been managing the industry group, which started back when robotics was a very small, new industry and Silicon Valley was more or less an unknown area of robotics. I think in the last five years, that's changed significantly; now, people look to Silicon Valley to see what is happening in robotics and AI. It seems like every major company and every government now has robotics and AI on their strategic road map. That's just a measure of how things have shifted in that spectrum between research and the real world. I think it was called 'crossing the chasm.' Various aspects of robotics are really crossing the chasm.

When we say robotics is in its early days, we've actually had a very solid industrial robotics industry for the last 50 years, but it has not been very visible—what I would call stupid robotics these days. It's large, rigid robotics, and it has been used for manufacturing, for electronics, automobile construction, welding, and dangerous materials handling; to a lesser extent, you've seen some of this technology used in areas like mining, port handling, and some of those logistics and defense industry applications. What's changing right now is that the robots of the 21st century are more affordable. They're smaller. They're more flexible, more agile, both physically and in terms of how easy they are to program, to an extent. These are very early stages, but that's where things are leading.

One of the critical things is that there are now more 'collaborative robots'; we call them collaborative robots because they are rated as safe for operation around people. It means that instead of having a closed-cell workspace, a workspace which is very, very clearly separated between where a robot operates and where everything else happens, we can start to consider ways of integrating robots into human activity, whether that's having a compliant, safe, collaborative robot on a factory line that can be moved around or an autonomous vehicle that's navigating around people. While initially this may happen more in factories where people are used to applying robots as a solution, it's starting to happen in areas that are non-factory based—areas like airports, any kind of package-handling facility, retail malls, and hospitals. Hospitals are actually early adopters.

One of the other areas that interests me is agriculture, because if we look at where the world has big problems that need to be solved, one of the clear issues is that our population is continuing to increase. It's well over seven billion, heading toward 10 billion in the next 10 or so years, and it's not just that the population is increasing, but the demands that we're making on our food production resources are increasing beyond the population increase. Everybody is predicting that we need to double food production in this next generation. So, by the year 2040, how do we double the world's food production when we can't double the acreage? That's just not possible.
In fact, we're losing arable land as the population increases, because cities tend to be built on the exact same areas that are fertile and accessible. We're increasing our demand for protein in particular, which requires even more land to grow, tend, and harvest. And here's the other thing: we're losing farmers. Coming from New Zealand and Australia in particular, I'm very aware that we have one of the highest rates of urbanization in the world, but this is being realized around the world now. Australians almost completely live in cities these days, and the average age of a person on the land is looking at retirement. They're within five years of retirement age, and there is no replacement. Most farmers have sent their kids to university, and most of them don't want to go back to a backbreaking manual labor job. We've also lost, certainly in Australia and New Zealand, access to cheap, seasonal labor. So, I see shades of this same problem replicated in almost every country around the world. The population is increasingly urban. ...

If we look back at the spread of the automobile, it took 30 years to reshape our cities and our suburbs, and for the ecology of the car, the social ecology of the car, to come into place. It changed jobs. It changed culture. It changed law. It changed the infrastructure. So, we're at the start of the rollout of robotics. How can we do the best we can to see things rolling out along the best pathway? I've been tracking groups that look at the ethics and philosophy of robotics, and also discussions around law and standards, and it seems to me that, valuable as each of those groups is, they're often acting after the fact. Design is the field that operates ahead of time, as it were. So for me, this is the most fruitful area to look at how to get the best possible robots out in the world today. How do we engage with robotics as it's being built in the earliest stages? I think the design community can play a very important role there.

Let's start right at the very high-level, top-down law that is the first one people will think of, which is that a robot should not be designed as a weapon. Secondly, robots should comply with existing law, including privacy law. What follows for me from this is the third one: robots are products, and as such, they should be safe, reliable, and not misrepresent their capabilities. I think that is actually a very significant one that we're seeing a lot of companies stretch way too much at the moment. The fourth one is the more sophisticated argument that robots are manufactured artifacts that convey the illusion of emotion and agency, and that illusion should not be misused to exploit us. There are many situations in which that could be misused. Now, the final one is that it should be possible to find out who is responsible for a robot. That seems very obvious, but it does get difficult. Who does own or control a robot? Is it the software? Is it the hardware? Is it the person who's using it? Is it the person who built it? Each of these laws or guidelines unpacks into some complex things that need to be negotiated in each situation, but it speaks toward the heart of what I think a lot of the ethical and philosophical dilemmas are about, and it points toward the fact that in many cases, we have an existing legal framework that says these are the things that society considers acceptable and these are the things that society does not.
If you're following these good design guidelines, then you're building something that is going to fit into what we broadly know is considered acceptable social behavior. Unfortunately, it's very easy for people to do something simply because it's new and unregulated, and that something can turn out to be potentially very poor. 3D-printed weapons are one example. Uneducated use of drones is another.

CES 2017: Why Every Social Robot at CES Looks Alike


In the middle of all of the autonomous car promises, slightly thinner and brighter televisions, and appliances that spy on you in as many different ways as they possibly can were a small handful of social robots. These are robots designed to interact with you at home. People responding to IEEE Spectrum's live Twitter feeds as we covered each announcement pointed out that these little white social home robots all look kinda similar to each other, and they also look kinda similar to that little white social home robot that managed to raise $3.7 million on Indiegogo in September of 2014: Jibo. To show what we're talking about (if you haven't been following along with our CES coverage, and you totally should be), here are three new social home robots (Kuri, Mykie, and Hub) that were announced Wednesday, along with Jibo for comparison. White. Curvy and smooth. Big heads on small bodies. An eye or eyes, but no ears or mouth, and no arms. A lot of design similarities with what is arguably the very first social home robot to (promise to) be commercially available (eventually). The question, though, is just why exactly these smooth, roundish, curvy, big-headed white robots all look the way that they do. Why do they look a bit like Jibo, and why does Jibo look the way it does? "We designed a very clean, modern looking robot that's friendly," Jibo's VP of Marketing Nancy Dussault-Smith told me yesterday. "I can understand why people want to have that kind of thing in their homes." Kaijen Hsiao and Sarah Osentoski, Mayfield Robotics' CTO and COO, told us something very similar about their robot, Kuri: "People are very picky about what goes in their homes," says Hsiao. "It's very hard to build something that matches everyone's decor, and the closest you can come is very minimalist and white. Also, if you want to hide sensors, windows that are transparent to IR are generally black, which is why you see robots with so much black." The robots all tend to be smooth and curvy not just because it's pleasing to the eye (conveying softness with organic and symmetrical shapes), but also because it's safer, especially for a robot that moves or that you're supposed to interact with through touch. And round heads are the easiest to move up and down and rotate while also concealing the mechanical joints and electronics inside. The specific proportion between the head and the body was, for Jibo, a very carefully thought-out design decision, said Dussault-Smith. Jibo's head is oversized because it's intended to be somewhat reminiscent of the cuteness of baby animals (humans included), which have disproportionately large heads. For Kuri, practical issues also came into play: the robot needed to be a certain height in order to provide a decent view of your home through its eye camera, which helped define the size of the head and the base needed to keep the robot stable. Jibo and Kuri also have substantially different philosophies when it comes to eyes. "Our original idea was to have a small screen that had eyes, and we were doing all of these crazy things to try to hide the rest of the screen," Osentoski told us. "We had decided early on, character-wise, that if you show anything but the eyes on the screen, you destroy the character, because it's not a face anymore," continued Hsiao. "Finally, I said, 'If we only want the screen to show eyes, why don't we just make physical eyes?'" Meanwhile, "Jibo's one eye was a very deliberate choice," said Dussault-Smith.
"Two eyes caught you a little in the uncanny valley; it felt a little too real. One eye was still able to have that communication, but without as much of the intimidation of it being like a person." And Jibo, of course, has a screen that can display all kinds of other information as well. The struggle to keep robots from being unconsciously anthropomorphized and then failing to live up to human-like expectations is another major driver of social robot design. This is where much of the minimalism comes from—avoiding human features as much as possible, especially around the robot's face, helps to prevent users from feeling that the robot they're interacting with should respond the way a human would. At this point, robots that try too hard to seem human can only disappoint. There are some very good reasons why the robots that people like and are comfortable with tend to share design characteristics. Being white helps them fit in with other decor. Being smooth and round helps them be safe. Minimalist faces help keep expectations in check, while round heads are the simplest to build. We're going to see a lot more robots like this, especially if Kuri, Mykie, Hub, and Jibo turn out to be commercially successful. What I think is more interesting than focusing on how similar they are is to look instead at why they're different, and what those differences mean about how those robots will interact with us. Fundamentally, as Jibo's Nancy Dussault-Smith points out, "what really differentiates robots is what's on the inside." Even if all of these social home robots really did look exactly the same, they're intended to do different things in different ways. Maybe some will be more successful than others, or maybe they'll all find their own niches: none of them are for sale yet, so it's much too early to tell, but we're definitely looking forward to finding out.