Channel: 100% Solutions: robotics

The Self-Driving Car's Bicycle Problem

Robotic cars are great at monitoring other cars, and they’re getting better at noticing pedestrians, squirrels, and birds. The main challenge, though, is posed by the lightest, quietest, swerviest vehicles on the road. “Bicycles are probably the most difficult detection problem that autonomous vehicle systems face,” says UC Berkeley research engineer Steven Shladover. Nuno Vasconcelos, a visual computing expert at the University of California, San Diego, says bikes pose a complex detection problem because they are relatively small, fast and heterogeneous. “A car is basically a big block of stuff. A bicycle has much less mass and also there can be more variation in appearance — there are more shapes and colors and people hang stuff on them.”

That’s why the detection rate for cars has outstripped that for bicycles in recent years. Most of the improvement has come from techniques whereby systems train themselves by studying thousands of images in which known objects are labeled. One reason for the gap is that most of that training has concentrated on images featuring cars, with far fewer bikes.

Consider the Deep3DBox algorithm presented recently by researchers at George Mason University and stealth-mode robotic taxi developer Zoox, based in Menlo Park, Calif. On an industry-recognized benchmark test, which challenges vision systems with 2D road images, Deep3DBox identifies 89 percent of cars; sub-70-percent car-spotting scores prevailed just a few years ago. Deep3DBox further excels at a tougher task: predicting which way vehicles are facing and inferring a 3D box around each object spotted in a 2D image. “Deep learning is typically used for just detecting pixel patterns. We figured out an effective way to use the same techniques to estimate geometrical quantities,” explains Deep3DBox contributor Jana Košecká, a computer scientist at George Mason University in Fairfax, Virginia. However, when it comes to spotting and orienting bikes and bicyclists, performance drops significantly. Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes.

Košecká says commercial systems are delivering better results as developers gather massive proprietary datasets of road images with which to train their systems. And she says most demonstration vehicles augment their visual processing with laser-scanning (i.e., lidar) imagery and radar sensing, which help recognize bikes and their relative position even if they can’t determine their orientation. Further strides, meanwhile, are coming via high-definition maps such as Israel-based Mobileye’s Road Experience Management system. These maps offer computer vision algorithms a head start in identifying bikes, which stand out as anomalies from pre-recorded street views. Ford Motor Co. says “highly detailed 3D maps” are at the core of the 70 self-driving test cars that it plans to have driving on roads this year.

Put all of these elements together, and one can observe some pretty impressive results, such as the bike spotting demonstrated last year by Google’s vehicles. Waymo, Google’s autonomous vehicle spinoff, unveiled proprietary sensor technology with further upgraded bike-recognition capabilities at this month’s Detroit Auto Show. Vasconcelos doubts that today’s sensing and automation technology is good enough to replace human drivers, but he believes it can already help human drivers avoid accidents.
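
Deep3DBox’s internals aren’t spelled out here, but the basic idea the researchers describe (use the same convolutional features that detect pixel patterns to also regress geometric quantities such as dimensions and heading) can be sketched briefly. The following PyTorch-style snippet is only an illustration under assumed layer sizes and a two-bin orientation scheme; it is not the authors’ actual network.

# Minimal sketch (not the authors' code) of the Deep3DBox idea: take a 2D
# detection crop and regress 3D box parameters -- object dimensions and a
# discretized orientation -- which, combined with the 2D box and camera
# geometry, constrain a 3D bounding box. All layer sizes are assumptions.
import torch
import torch.nn as nn

class Box3DHead(nn.Module):
    """Regress 3D dimensions and a binned orientation from a 2D crop."""
    def __init__(self, num_bins: int = 2):
        super().__init__()
        self.num_bins = num_bins
        self.backbone = nn.Sequential(            # stand-in for a pretrained CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dims = nn.Linear(32, 3)              # offsets to class-mean height/width/length
        self.bin_conf = nn.Linear(32, num_bins)   # which orientation bin the object falls in
        self.bin_res = nn.Linear(32, num_bins * 2)  # sin/cos residual within each bin

    def forward(self, crop):
        f = self.backbone(crop)
        return {
            "dims": self.dims(f),
            "bin_conf": self.bin_conf(f),
            "bin_res": self.bin_res(f).view(-1, self.num_bins, 2),
        }

# Example: one 64x64 crop of a detected car or cyclist
out = Box3DHead()(torch.randn(1, 3, 64, 64))
print({k: v.shape for k, v in out.items()})
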
Automated cyclist detection is seeing its first commercial applications in automated emergency braking (AEB) systems for conventional vehicles, which are expanding to respond to pedestrians and cyclists in addition to cars. Volvo began offering the first cyclist-aware AEB in 2013, crunching camera and radar data to predict potential collisions; it is rolling out similar tech for European buses this year. More automakers are expected to follow suit as European auto safety regulators begin scoring AEB systems for cyclist detection next year.

That said, AEB systems still suffer from a severe limitation that points to the next grand challenge that AV developers are struggling with: predicting where moving objects will go. Squeezing more value from cyclist-AEB systems will be an especially tall order, says Olaf Op den Camp, a senior consultant at the Dutch Organization for Applied Scientific Research (TNO). Op den Camp, who led the design of Europe's cyclist-AEB benchmarking test, says that's because cyclists’ movements are especially hard to predict. Košecká agrees: “Bicycles are much less predictable than cars because it’s easier for them to make sudden turns or jump out of nowhere.”

That means it may be a while before cyclists escape the threat of human error, which contributes to 94 percent of traffic fatalities, according to U.S. regulators. “Everybody who bikes is excited about the promise of eliminating that,” says Brian Wiedenmeier, executive director of the San Francisco Bicycle Coalition. But he says it is right to wait for automation technology to mature. In December, Wiedenmeier warned that self-driving taxis deployed by Uber Technologies were violating California driving rules designed to protect cyclists from cars and trucks crossing designated bike lanes. He applauded when California officials pulled the vehicles’ registrations, citing the ridesharing firm's refusal to secure state permits for them. (Uber is still testing its self-driving cars in Arizona and Pittsburgh, and it recently got permission to put some back on San Francisco streets strictly as mapping machines, provided that human drivers are at the wheel.) Wiedenmeier says Uber's “rush to market” is the wrong way to go. As he puts it: “Like any new technology this needs to be tested very carefully.”
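
To see why prediction is the hard part, consider the simplest calculation a cyclist-aware AEB system can make: a time-to-collision estimate that assumes everyone keeps their current speed and heading. The moment a cyclist swerves or brakes, that assumption fails, which is exactly the difficulty Op den Camp and Košecká describe. The toy check below is illustrative only; the speeds, gap, and braking threshold are made-up numbers, not any automaker's logic.

# Toy constant-velocity time-to-collision (TTC) check, the simplest possible
# stand-in for what a cyclist-aware AEB system computes. Real systems fuse
# radar/camera tracks and model acceleration and turning; all numbers here
# are illustrative assumptions.
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; inf if the gap is opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def should_brake(gap_m: float, ego_speed: float, cyclist_speed: float,
                 threshold_s: float = 1.5) -> bool:
    # Cyclist riding ahead in the same lane, same direction.
    ttc = time_to_collision(gap_m, ego_speed - cyclist_speed)
    return ttc < threshold_s

# Car at 14 m/s (~50 km/h) closing on a cyclist doing 5 m/s, 12 m ahead:
print(should_brake(12.0, 14.0, 5.0))   # TTC ~1.3 s -> True, trigger braking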

ABC News

Many prominent Republicans are not showing up. We'll get you up to speed on what to watch for from today's GOP rules panel. The convention takes place in Philadelphia July 25-28. Harry, 31, tested negative for the disease. The nominations started at 11:30 a.m. ET.

Session Information

The Rockwell Automation TechED event is the premier training, education and networking event of the year. See the wide range of session topics and descriptions below. Review the sessions to start planning your ideal week at this year's Rockwell Automation TechED. Use the online schedule to sort sessions by day, type and category. Review the PDF for a complete printable version of all the sessions and the schedule. Plan your day now, as sessions are filling up fast. Registration is open.


ABB invests in Enbala Power Networks to co-develop cutting-edge grid software

ABB announced today it has invested in Enbala Power Networks, a leading developer of software for managing power distribution networks. The investment was made through ABB’s venture capital unit, ABB Technology Ventures.

Today’s grid operators require digital solutions to deal with challenges posed by the increasing amount of intermittent renewable and distributed energy sources being integrated into the grid. This is driven by the need to keep the grid balanced and optimized in real time as it accommodates more wind turbines and rooftop photovoltaic panels. ABB and Enbala are joining forces to develop a new distributed energy resource management system (DERMS), which will enable utilities, energy service companies and grid operators to efficiently manage the entire lifecycle of distributed energy resources, like solar and wind, while ensuring safe, secure and efficient operation of the electric distribution network. It will also enable more active participation from energy consumers.

“We are excited to invest in and partner with Enbala, a fast-growing innovation pioneer recognized for its ground-breaking energy resource control and optimization software,” said ABB Chief Technology Officer Bazmi Husain. “This investment will lead to a further enhancement of our digital ABB Ability offering, helping customers to maximize opportunities by leveraging distributed energy resources.” The two companies will also work on new capabilities to seamlessly integrate distributed energy resources into ABB’s microgrid solutions.

“The investment from a leading technology company like ABB will drive enormous value for our customers and the distributed energy industry,” said Enbala President and CEO Arthur “Bud” Vos. “We believe this partnership will drive further innovation and operational integration, making large-scale use of distributed energy a reality. We look forward to working together with ABB on this strategic initiative.”

ABB (ABBN: SIX Swiss Ex) is a pioneering technology leader in electrification products, robotics and motion, industrial automation and power grids, serving customers in utilities, industry and transport & infrastructure globally. Continuing a more than 125-year history of innovation, ABB today is writing the future of industrial digitalization and driving the Energy and Fourth Industrial Revolutions. ABB operates in more than 100 countries with about 132,000 employees. www.abb.com

ABB Technology Ventures (ATV) is the strategic venture capital unit of ABB that invests in high-potential industrial technology and energy companies that can benefit from ABB’s deep R&D resources, global sales channels and wide-ranging partnerships. ATV has deployed nearly $200 million across a range of technologies.

42 Hours of Ambient Sounds from Blade Runner, Alien, Star Trek and Doctor Who Will Help You Relax & Sleep

Back in 2009, the musician who goes by the name “Cheesy Nirvosa” began experimenting with ambient music, before eventually launching a YouTube channel where he “composes longform space and scifi ambience.” Or what he otherwise calls “ambient geek sleep aids.” Click on the video above, and you can get lulled to sleep listening to the ambient droning sound heard in Rick Deckard’s apartment (get ready, Blade Runner fans!). It runs a good continuous 12 hours, and there are hundreds of hours more in his playlists.

You’re more a Star Trek fan? Ok, try nodding off to the idling engine noise of a ship featured in Star Trek: The Next Generation. Mr. Nirvosa cleaned up a sample from the show and then looped it for 24 hours. That makes for one long sleep. Or how about 12 hours of ambient engine noise generated by the USCSS Nostromo in Alien? Finally, and perhaps my favorite, Cheesy created a 12 hour clip of the ambient sounds made by the Tardis, the time machine made famous by the British sci-fi TV show, Doctor Who. But watch out. You might wake up living in a different time and place. For lots more ambient sci-fi sounds (Star Wars, The Matrix, Battlestar Galactica, etc.) check out this super long playlist here. via Dangerous Minds

Follow Open Culture on Facebook, Twitter, Instagram, Google Plus, and Flipboard and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. And if you want to make sure that our posts definitely appear in your Facebook newsfeed, just follow these simple steps.

Related Content:
10 Hours of Ambient Arctic Sounds Will Help You Relax, Meditate, Study & Sleep
Moby Lets You Download 4 Hours of Ambient Music to Help You Sleep, Meditate, Do Yoga & Not Panic
Music That Helps You Sleep: Minimalist Composer Max Richter, Pop Phenom Ed Sheeran & Your Favorites

Terrified about Trump? Quit your job, start an A-Team. We'll fund it.

You are a talented human with a broad set of skills. You can speak, write, pitch, and persuade. You can make a PowerPoint, or make art. You work every day, solving problems. You get a ton done. Now, let’s say you suddenly became terrified for the world’s future. What would you do? You can call Congress, or attend a protest. And you should. But is that it? Why stop there? Why set aside everything you’ve learned, everything you are, to be simply one more terrified person on a phone line, or marching in a street?

What if you did the following, as well? Find someone you love to work with. Pick an issue area and angle. Do activism full-time using every connection, skill, tool, and trick at your disposal—until you win. Our goal: to convince you to take exactly these steps, give you a playbook, and fund you. This is a proposal for defending the world. And it's a good one. If you’re feeling like the world needs defending right now, keep reading.

Me? Full-time activism?

Yes. This may not be obvious, but it’s true: all of your powerful, hard-won skills and habits are highly transferable to the work of political action. The workday of a lobbyist, community organizer or political operative—and the skills they use—aren’t so different from those of writers, journalists, lawyers, advertisers, therapists, nurses, designers, engineers, teachers, or salespeople. It’s just language, problem solving, and dedication. If you want to change the world, that's all it takes. (Meanwhile, making this a fulfilling job that pays your bills is, while difficult, easier than you think.)

We write this as two people who have been doing this work for some time. For years, we’ve been practicing the approaches that suddenly, more than ever, every issue needs: approaches that achieve surprising upset victories, against the odds. The world has been getting... weird lately. But new, massive opportunities to make it better are emerging all the time. So the world needs teams like ours, in every issue area. We need some company out here.

Who are you? How are you so sure this can work?

The name of our organization is Fight for the Future. Our names are Holmes Wilson and Tiffiniy Cheng. For the past 5 and a half years we’ve been toiling away on a previously somewhat obscure set of political issues facing the Internet. During that time, we helped create two of the biggest victories for grassroots power in the past decade: the 2015 victory for Net Neutrality and the 2012 victory against SOPA & PIPA. Neither was supposed to be winnable. Insiders said we were crazy to even try.
Both were against the most powerful corporate lobbies in America: the media and pharmaceutical lobbies (MPAA and PhRMA) on SOPA/PIPA, and then the cable and telecom lobby (NCTA) on Net Neutrality. Claiming credit is dubious business. But in both cases, we're pretty sure victory wouldn't have happened without interventions we created, and many knowledgeable insiders agree. (These were team efforts, so there are several organizations and individuals for whom the same is also true—that is, that we wouldn't have won without them.) We did this with a tiny fraction of the resources and conventional power our opponents enjoyed.

We're two friends from high school. There have been 5-10 people with us on our team—some of the best people we've found to do this kind of activism. We do most of our activism online, but we organize real-life protests sometimes too. We’re funded by donations (large & small) but when we started our careers we were entirely unfunded, doing it for the pure pleasure of winning. We believe our experience—of big victories on small budgets—puts us in a special position to make a proposal, to everyone on the planet trying to answer the question "Oh my god, what should I do?" If you’re looking for an answer to that question, keep reading, or sign up right now.

OK, what's your plan?

The plan is to describe the key ingredients for something we’re calling an Activism Team, or A-Team. First, we'll tell you what that is. And then… The plan is to convince you to start an A-Team, join one, or volunteer for one. And then... The plan is to start dozens of A-Teams focused on issue areas where humanity faces serious political challenges or opportunities. Then the plan is to help the best A-Teams find the support they need to do this full-time, going forward, for as long as it takes. You probably have some questions!

You’re offering funding to do activism?

Yes. We're still working out the details, but if you've got a strong 2-3 person A-Team and a target we'd give you $15,000 right now for the first month, just to see what you can do. If you make a big splash or measurable impact on your target in that time, we're pretty sure we can find you more. Interested? Sign up. Again, you certainly have more questions.

What is an A-Team?

Activism Teams, or A-Teams, are new, small, interdisciplinary teams focused on defending or improving some aspect of the world. An A-Team is like a special ops team for activism, with all the skills needed to create political change. A-Teams can write persuasively; speak with a strong voice online (with design, images, video, and code); size up their target’s internal power structures and weaknesses; and build strategies to rally the public, focusing collective action at the right pressure points, at the right time.

When you work in politics, it’s shocking how incomplete most political pushes are, even major ones. This is easy to observe. How many major, upcoming issues have an excellent, digestible explanation online? How many pair that explanation with something simple and meaningful that the average person—once convinced—can go out and do? Google it: Obamacare, Immigration, Deportation, Medical Marijuana. Not many issues have a clear, prominent source for information and action, do they? At a time like this, is that acceptable? No way. A-Teams can rapidly fill these gaps, and that’s just the beginning.

Where A-Teams really shine...

A-Teams shine at building a drumbeat of stories and moments.
They can organize provocative, interesting collective action that grabs the public’s attention, and ensure their target is getting pressure from all necessary angles. That's how we turned the tide against SOPA. Then they can do it again and again, riveting public & press attention and overwhelming the ability of the adversary—whether an industry lobby, or an administration—to spin a counter narrative. That's how we won net neutrality.

A-Teams shine at rapid response. When a story breaks, conventional organizations respond with a tweet, an email, or a press release. But A-Teams make something epic happen, fast. Protests all around the country. Some creative symbol of support or resistance, on social media, the streets, and local news. That's how we responded when the FBI wanted an iPhone backdoor.

A-Teams work great when a large number of people want something to happen (or not happen) but entrenched power, corruption, or the status quo is pushing the other way. Fueled by that shared collective intent, staring at a political barrier, A-Teams find a way through to the other side.

What can such small teams really do?

Anything! Seriously! As an A-Team, you pick an issue that matters, and try to win tangible, significant changes. And actually do it. The Internet has changed the game on what we can achieve. Once you go full-time—or start volunteering for people who are—you’ll be shocked at what you achieve. We promise. It doesn’t matter the issue.

Social change can come from media pressure, large mobilizations, small groups of insiders, influential outsiders, righteous narratives, technological shifts, or simply the discovery and spread of some big new idea. In most cases it’s some combination of some or all of these, over a period of months or years. The cool thing is, all of these factors are things a typical human can figure out how to influence. So think of them as levers of change, levers you can learn to pull. It just takes time and dedication, focusing on a problem enough to figure out what action is going to fix it and how. This isn’t any secret; it’s how the most powerful lobbyists, advertisers, and public relations firms have worked since their inception.

Even when victory looks unlikely or impossible, there’s always some scenario where the right confluence of factors will let you win. Ask around and some political veteran will tell you what those things are, while giving you a million reasons why they’ll never happen. So, ignore the second part. History always proves these “never gonna happen” predictions wrong, again and again. But pay very close attention to the first, and make it a to-do list. List all those factors, and then start pulling those levers! As you succeed, you’ll notice that reality starts shifting and the impossible starts becoming tangible. (Lobbyists for Comcast or Exxon Mobil get told “it’s never gonna happen” too, but can you imagine them just giving up?) Once you get momentum, the “never gonna happen” guys start changing their tune and then, when you’ve achieved the impossible, the same people who told you impossible will tell you a million reasons why it was inevitable, and that you had nothing to do with it. (Again, you’ll want to ignore this part too.)

But, what should I do?

This is a much better question, though it’s a harder one to answer. First, we’re setting up an A-Team on immigration right now. If you’re interested in joining, sign up.
Since there are a million things to do, it’s usually best for A-Teams to focus on issues behind major human problems, or places where humanity is at risk of really, deeply screwing up. Think big. You have to use your instinct and play to your strengths. Where can you have the most impact? Here are just a few examples of A-Team objectives that could be crucially important right now:

Healthcare / ACA
Climate
Immigration
The Wall
Corruption
Racism / Fascism
Police
Prisons
Ending the drug war
Foreign policy
Internet
Telecom Competition
Impeachment
Economic populism
Renewable energy

If one of these interests you, or if you have another objective you’re burning to pursue, sign up.

How do I win, again?

Well, sign up if you’re interested in trying. We can show you how. The process is essentially creating a list of all the conditions under which victory becomes first possible, and then likely. Then, go out, hustle and make all of those things happen. If you have to work all weekend, do it. If you have to stay up late, stay up late. Sometimes you’ll have to rack your brain to think of who you might know who can help you. That’s cool. Just remember that humanity is depending on you not to lose, so everything on your list really needs to get done in some form.

This doesn’t mean you’re on your own. There will already be other activists, academics, journalists, bloggers and organizations working on your issue. Seek them out, and plug in. But don’t follow them blindly or trust them to always be right. Remember: the problem you’re facing is hard, and hasn’t been cracked yet, by these groups or anyone. All of the rhetoric and infrastructure on both sides of the debate are part of the same equilibrium—the one you’re trying to break through. So help out graciously, accept help and guidance graciously, but maintain independence. The most important thing is time.

How do I make time? I have a job.

One solution is to volunteer or contract part-time for an existing team. If you have specific skills and are interested, sign up. But for starting a team, this can be a tough problem. It’s a bit of a Catch-22 because nobody (including us, probably) will promise you a steady job doing activism before seeing what you can do. So you and your teammates have to figure out some way to scrape together enough time to make a splash. That said, if you think you’d be good at this or you’d like to try, be in touch. We can give you $15,000 for one month. Interested? Sign up!

Often the best answer to the funding question is a personal one. If you’re young, maybe you can live for free or cheap by moving back home. If you’re in a highly paid profession, maybe you can cut your hours and still pay your basic bills. If you have savings, maybe you can cruise for a few months. Maybe you’ve always wanted to check out Thailand, or India, or build a tiny house, or try some rural or urban lifestyle where you don’t really need that much money. You don’t have to be in Washington DC or even North America to work on US issues, though time zones are an important consideration. (For example, one of us lives in Brazil right now.) The 4-Hour Workweek is a book that rubs some people the wrong way, but also has great tips and mental structures for making time for the things you really care about in life. It’s worth checking out.

The cool thing about being able to self-fund your team (e.g. by living cheap and working unpaid) is that you get total independence to follow your vision.
This lets you find opportunities for activism that perhaps nobody else in the world is thinking about. Too often, there’s only funding available to work on the boring stuff. Again remember: existing funding structures are part of the same equilibrium you're trying to shift. As funders ourselves we hope to be an exception, but there’s no magic answer. Finding a way to be independent, especially in the initial stages, is always best.

Aren’t my personal chances of being able to do this highly influenced by all of the same unjust power structures we want to fight?!?!?

Yes of course, until you change them (or free yourself from them, somehow.) But that’s what we’re talking about here!

How are A-Teams sustainable?

Once you start making a big impact on an issue area, there are lots of ways to get funding. If you’re operating in a space where there are existing nonprofit organizations, figure out where they get funding, make friends with them, and be helpful to them. Your costs as a tiny team are so much lower than theirs, so you can get by on much smaller grants, or just one or two large ones. Once you prove your usefulness, some coalition allies will want to keep you in the game, and they’ll likely give you pointers on funding. As you start to build large audiences, ask them to donate. And look out for individuals or organizations who support what you’re doing and might be able to give more. If you’re doing something really cutting edge, individual supporters might be the best fit at first. The world is a random place and very political people become wealthy sometimes. You'll be pleasantly surprised by the donors you find.

The world isn’t short on funding. It’s short on people who can take a donation of $100,000 or $1,000,000 and actually change the world in some big way. Once you learn how, and have a track record, you'll find funding if you look. If you already have a small activism team making a big, measurable impact but you’re having problems fundraising, then definitely be in touch. We can very likely speed up your search for funding.

Have a question? Email us: ateams@fightforthefuture.org

Want to be a part of this? Sign up.

That’s all for now. We can tell you more about where to focus and how to win. But right now you’ve a choice to make. You can keep calling Congress, and keep showing up at local vigils when something disastrous happens. (And you should.) But you can also take your skills, build a team, and do something much, much bigger.

ABBVoice: A Call to Action For The Internet Of Things Industry: Let's Write A Data Bill Of Rights For Cloud Customers

Here’s a Data Manifesto addressed to the leading companies in the global industrial Internet of Things (IoT) solutions and services industry: It’s time to give our customers the reassurance of an IoT Data Bill of Rights. We want our customers to feel safe in the cloud (no matter who provides that cloud). It’s better for their business, and better for ours.

In the $11+ trillion Internet of Things, Services, and People marketplace, the way to unlock value is to know as much as possible about our customers’ problems. The improved visibility of cloud connection lets us provide network-effect solutions and upgrades that accelerate their success and make us all smarter. But as we’ve all seen, customers are wary of entrusting their most precious resource – proprietary data – to the cloud. In the current uncertain environment, it’s hard to blame them. It would benefit our entire ecosystem for us as an industry coalition to reassure our customers by writing and adopting an IoT Data Bill of Rights – enshrined in our contracts with them – that states in plain, standard language:

what data we gather from our customers
why we need it
how we secure it (via technology and policy)
how they benefit from our practices
and what we’ll do with their data if they choose to stop being customers

The IoT Data Bill of Rights should not be written by lawyers. It should be easy to understand.

Why am I calling on operational technology and information technology companies to form an IoT Data Bill of Rights coalition? Because, with our digital solutions and services platforms, hosted applications, and monitored remote-connected systems, we are the companies that operate data-driven cloud solutions for multiple global corporate customers. I hope cloud technology providers such as IBM (Watson) and SAP (HANA) will join our coalition, but when it comes to data, cloud and technology providers are not necessarily connected directly to end users. They are intermediaries. We are the ones customers ultimately entrust with their data, and we are the ones who owe these customers an ironclad promise outlining common practices, rights, and expectations.

There have been attempts to write a consumers’ “Users’ Data Manifesto,” and jurisdictions around the world have developed regulatory frameworks to protect individual privacy – notably, the European Union’s General Data Protection Regulation (GDPR), which goes into effect in May 2018. But surprisingly, there is no IoT business Data Bill of Rights. Now is the time for our industry to start developing one.

In format and concept, the IoT Data Bill of Rights could draw inspiration from such things as the U.S. government’s highly effective Airline Passengers’ Bill of Rights, which protects airline passengers from long waits on the tarmac, hidden fees, being kept in the dark about reasons for delays, no access to water or lavatories, and bag-check fees for luggage that ends up lost, among other guarantees. In a more data-centric example, the coalition might incorporate aspects of the U.S. Health Insurance Portability and Accountability Act (HIPAA), which requires healthcare companies that collect data to design IT systems that separate patients’ identity data from measurement data. In medicine, there’s obvious universal benefit in sharing measurement data on such things as drug trials and clinical best practices. But divulging the identity of individual patients violates their privacy, so that data is kept separately.
In our world, there’s identity data, measurement data, and a third category: insight data generated by machine learning and AI across industries that helps us provide better service to our customers in those industries. Identifying these three types of data and what we do with each might be a good place to start the IoT Data Bill of Rights. For example, one “Article” of the IoT Data Bill of Rights could explain the three kinds of data, and say that if customers take their business elsewhere, we can only promise to delete the first two (identity and measurement). What we can’t delete is the insight data. Insights are generated across hundreds of similar companies in the same business, and it’s impossible to extract the data droplets of one company from the entire lake of insights.

Another “Article” could deal with issues that arise as our products get smarter – robots, cars, entire automation systems. Companies in our space could promise to list all sensors in each product, what kind of information the product is acquiring, and what we will do with that information. Our products are tracking us and listening in. In the B2C world, Teslas know what routes we drive, and when we use or disengage the autopilot. Alexa and Google Home listen to what we’re saying. Do they report it to their makers? In IoT, my company’s YuMi collaborative robots (or “cobots”) could soon have speech sensors so they can receive and follow verbal commands. If the YuMi doesn’t understand a command, that speech file would be sent back to ABB for processing. It’s only fair to give customers data sheets for “smart” products like YuMi that go beyond today’s specs on size and energy consumption to specify explicitly what YuMi is sensing, and what happens to that data. Customers can then choose the right products for their needs and sensitivities. If they don’t want a semi-sentient YuMi, they can opt for one that doesn’t hear or speak.

I’ve suggested a few preliminary ideas simply to get the conversation started. I hope our peers in the IoT industry will raise hands and say, “Yeah, let’s work on this,” and offer criticism, insights, additions, or deletions. Every company in our industry is generating cutting-edge cloud ideas. If their experience with customers is like ours, the time has clearly come to reassure customers and generate trust by setting consistent, universal expectations, and keeping our promises. I’ll be reaching out to many of you in the near future, and hope we can move forward together on an IoT industry Data Bill of Rights.

Guido Jouret is Chief Digital Officer of ABB, a global technology company in power and automation that enables utility, industry, and transport & infrastructure customers to improve performance while lowering environmental impact. The ABB Group of companies operates in 100 countries and employs 132,000 people.
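
As a purely illustrative exercise (not anything ABB or a coalition has specified), the three data categories and the proposed offboarding promise could be captured in a small, machine-readable policy object, which is roughly what such an “Article” would have to pin down:

# Illustrative sketch only: encoding the three data categories described above
# and the proposed offboarding rule (identity and measurement data are deleted,
# aggregated insight data is retained) as a tiny policy object. Nothing here is
# an actual ABB or coalition specification.
from dataclasses import dataclass
from enum import Enum

class DataCategory(Enum):
    IDENTITY = "identity"        # who the customer is
    MEASUREMENT = "measurement"  # raw sensor/telemetry data from their assets
    INSIGHT = "insight"          # cross-customer models and aggregated learnings

@dataclass
class RetentionRule:
    category: DataCategory
    deleted_on_exit: bool
    rationale: str

IOT_DATA_BILL_OF_RIGHTS = [
    RetentionRule(DataCategory.IDENTITY, True,
                  "Belongs to the customer; erased when they leave."),
    RetentionRule(DataCategory.MEASUREMENT, True,
                  "Belongs to the customer; erased when they leave."),
    RetentionRule(DataCategory.INSIGHT, False,
                  "Aggregated across many customers and cannot be disentangled."),
]

def offboard(customer_data: dict) -> dict:
    """Return what survives after a customer leaves, per the rules above."""
    keep = {r.category for r in IOT_DATA_BILL_OF_RIGHTS if not r.deleted_on_exit}
    return {cat: blob for cat, blob in customer_data.items() if cat in keep}

print(offboard({DataCategory.IDENTITY: "Acme Corp",
                DataCategory.MEASUREMENT: "motor telemetry",
                DataCategory.INSIGHT: "fleet-wide failure model"}))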

SoftBank invests $5 billion into Chinese ride-sharing company Didi Chuxing

SoftBank, the giant telecom company, is venturing out into the world of robotics and transportation services. DealStreetAsia said that SoftBank is trying to transform itself into the 'Berkshire Hathaway of the tech industry' with the recent launch of a $100 billion technology fund.

First SoftBank bought Aldebaran, the maker of the Nao and Romeo robots, and redirected it to produce the Pepper robot, which has been sold in the thousands to businesses as a guide, information source and order taker. Then came bigger partnerships with Foxconn and Alibaba to manufacture and market Pepper and other consumer products, and most recently the establishment of the $100 billion technology fund. Recognizing that the telecom services market has matured, SoftBank is putting its money where it can to participate in the new worlds of robotics and transportation as a service. The $5 billion investment in Didi Chuxing, China's largest ride-sharing company, is a perfect example.

Didi, which already serves more than 400 million users across China, provides services including taxi hailing, private car hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Test Drive, DiDi Car Rental and DiDi Enterprise Solutions to users in China via a smartphone application. Tencent, Baidu and Alibaba are big investors -- even Apple invested $1 billion.

The transformation of the auto industry into one focused on providing transportation services is a moving target with much news, talent movement, investment and widely-varying forecasts. But all signs show that it is booming and growing. For more information on this subject, read the views of Chris Urmson, previous CTO of Google's self-driving car group, in my article entitled Transportation as a Service: a look ahead.

Robots that Learn

Last month, we showed an earlier version of this robot where we'd trained its vision system using domain randomization, that is, by showing it simulated objects with a variety of colors, backgrounds, and textures, without the use of any real images. Now, we've developed and deployed a new algorithm, one-shot imitation learning, allowing a human to communicate how to do a new task by performing it in VR. Given a single demonstration, the robot is able to solve the same task from an arbitrary starting configuration.

[Video caption: Our system can learn a behavior from a single demonstration delivered within a simulator, then reproduce that behavior in different setups in reality.]

The system is powered by two neural networks: a vision network and an imitation network. The vision network ingests an image from the robot's camera and outputs a state representing the positions of the objects. As before, the vision network is trained with hundreds of thousands of simulated images with different perturbations of lighting, textures, and objects. (The vision system is never trained on a real image.) The imitation network observes a demonstration, processes it to infer the intent of the task, and then accomplishes that intent starting from another configuration. Thus, the imitation network must generalize the demonstration to a new setting.

But how does the imitation network know how to generalize? The network learns this from the distribution of training examples. It is trained on dozens of different tasks with thousands of demonstrations for each task. Each training example is a pair of demonstrations that perform the same task. The network is given the entirety of the first demonstration and a single observation from the second demonstration. We then use supervised learning to predict what action the demonstrator took at that observation. In order to predict the action effectively, the robot must learn how to infer the relevant portion of the task from the first demonstration.

Applied to block stacking, the training data consists of pairs of trajectories that stack blocks into a matching set of towers in the same order, but start from different start states. In this way, the imitation network learns to match the demonstrator's ordering of blocks and size of towers without worrying about the relative location of the towers. The task of creating color-coded stacks of blocks is simple enough that we were able to solve it with a scripted policy in simulation. We used the scripted policy to generate the training data for the imitation network. At test time, the imitation network was able to parse demonstrations produced by a human, even though it had never seen messy human data before.

The imitation network uses soft attention over the demonstration trajectory and the state vector which represents the locations of the blocks, allowing the system to work with demonstrations of variable length. It also performs attention over the locations of the different blocks, allowing it to imitate longer trajectories than it's ever seen, and stack blocks into a configuration that has more blocks than any demonstration in its training data.

For the imitation network to learn a robust policy, we had to inject a modest amount of noise into the outputs of the scripted policy. This forced the scripted policy to demonstrate how to recover when things go wrong, which taught the imitation network to deal with the disturbances from an imperfect policy.
Without injecting the noise, the policy learned by the imitation network would usually fail to complete the stacking task. If you’d like to help us build this robot, join us at OpenAI.
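
The post describes the architecture only at a high level, but the two-network shape is easy to sketch: a vision network maps a camera image to object positions, and an imitation network attends over a demonstration plus the current state to predict the next action, trained with supervised learning as described above. The PyTorch sketch below is a rough illustration; the dimensions, attention mechanism, and layer choices are assumptions, not OpenAI's implementation.

# Rough sketch of the two-network structure described above (not OpenAI's code).
# Dimensions and the attention scheme are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 32, 7, 128

class VisionNet(nn.Module):
    """Camera image -> estimated object positions (trained only in simulation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, STATE_DIM),
        )
    def forward(self, image):
        return self.net(image)

class ImitationNet(nn.Module):
    """(demonstration, current state) -> next action, via soft attention."""
    def __init__(self):
        super().__init__()
        self.embed_demo = nn.Linear(STATE_DIM, HIDDEN)
        self.embed_obs = nn.Linear(STATE_DIM, HIDDEN)
        self.attn = nn.MultiheadAttention(HIDDEN, num_heads=4, batch_first=True)
        self.policy = nn.Linear(HIDDEN, ACTION_DIM)

    def forward(self, demo_states, current_state):
        # demo_states: (batch, T, STATE_DIM); current_state: (batch, STATE_DIM)
        demo = self.embed_demo(demo_states)
        query = self.embed_obs(current_state).unsqueeze(1)
        context, _ = self.attn(query, demo, demo)   # soft attention over the demo
        return self.policy(context.squeeze(1))

obs = VisionNet()(torch.randn(1, 3, 64, 64))   # image -> estimated block positions
demo = torch.randn(1, 50, STATE_DIM)           # one 50-step demonstration
action = ImitationNet()(demo, obs)             # trained with supervised learning
print(action.shape)                            # -> torch.Size([1, 7])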

ETH Zurich's Omnicopter Plays Fetch

Most aircraft are designed to be very good at going upwards, and also not bad at going forwards, with some relatively small amount of thought given to turning left and right. Thanks to gravity, downwards is usually taken care of. Even aircraft designed to hover, like helicopters and quadrotors, have preferential directions of orientation and travel where their particular arrangement of motors and control surfaces makes them most effective.

ETH Zurich’s Omnicopter goes about flying in a totally different way. With eight motors oriented in all directions, the Omnicopter doesn’t have an up or down or front or back: It can translate and rotate in any direction, letting it play a very skilled game of fetch.

We have developed a computationally efficient trajectory generator for six degrees-of-freedom multirotor vehicles, i.e. vehicles that can independently control their position and attitude. The trajectory generator is capable of generating approximately 500,000 trajectories per second that guide the multirotor vehicle from any initial state, i.e. position, velocity and attitude, to any desired final state in a given time. In this video, we show an example application that requires the evaluation of a large number of trajectories in real time.

There are two particularly cool things about this video, I think. The first is how the Omnicopter is able to keep the net stationary while making the catch, even if the rest of its body is still in motion. This is only possible with the Omnicopter, because of how translation and rotation are decoupled from each other: A quadrotor configuration can’t do it, because it has to rotate itself in order to control translation (it tilts to move sideways, in other words).

The second cool thing is at the very end, where the Omnicopter returns the ball by rotating in place so that the ball drops out of the net. That kind of performance makes me wonder whether Omnicopter-like designs are the future of aerial manipulation, as opposed to what we’re used to seeing, which are multi-DoF arms stapled to helicopters or quadrotors. The latter approach essentially uses the UAV to provide an (inevitably shaky) mid-air platform from which an arm can operate, making for a very complex and not particularly stable system that can have issues with accuracy and torque. With an Omnicopter, on the other hand, it seems like you could just stick a gripper onto an arbitrary face of it, and then have the entire robot serve as an actuator. It might not be able to do everything a multi-DoF arm can do, but I bet it would be a lot easier to manage. At least, when you’re able to externally localize with a motion-capture system like in the fetching video (not that that’s cheating or anything).

The fetching work comes from Dario Brescianini and Raffaello D’Andrea at the Institute for Dynamic Systems and Control (IDSC), ETH Zurich, Switzerland. It doesn’t look like there’s a specific paper yet, but for more on the Omnicopter, you can read this paper from ICRA 2016.

[ ETH FMA ]

Thanks Markus!
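
The ETH trajectory generator itself isn't published alongside this video, but the flavor of "generate a huge number of fixed-time candidate trajectories and keep a feasible one" can be illustrated with a per-axis quintic polynomial that meets arbitrary initial and final position, velocity, and acceleration constraints. The sketch below is a generic stand-in, not the Omnicopter code:

# Generic sketch of fixed-time trajectory generation for one axis: a quintic
# polynomial that satisfies arbitrary initial and final position/velocity/
# acceleration constraints. Evaluating many such candidates (different end
# states, different durations) and keeping the best feasible one is the flavor
# of approach described above; this is NOT the ETH Zurich implementation.
import numpy as np

def quintic(p0, v0, a0, pT, vT, aT, T):
    """Coefficients c[0..5] of p(t) = sum c[k] t^k meeting the boundary states."""
    A = np.array([
        [1, 0,    0,      0,       0,        0],
        [0, 1,    0,      0,       0,        0],
        [0, 0,    2,      0,       0,        0],
        [1, T,  T**2,   T**3,    T**4,     T**5],
        [0, 1,  2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0,    2,    6*T,  12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, pT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

def max_accel(c, T, n=50):
    """Crude feasibility check: peak acceleration along the trajectory."""
    t = np.linspace(0.0, T, n)
    acc = 2*c[2] + 6*c[3]*t + 12*c[4]*t**2 + 20*c[5]*t**3
    return np.abs(acc).max()

# From rest at x=0 to x=3 m with zero end velocity/acceleration in 1.5 s:
c = quintic(0, 0, 0, 3, 0, 0, 1.5)
print(max_accel(c, 1.5))   # compare against the vehicle's thrust limits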

KiCAD Best Practices: Library Management

One common complaint we hear from most new KiCAD users relates to schematic and footprint libraries. The trick is to use just one schematic symbol and footprint library each with your project. This way any changes to the default schematic libraries will not affect your project and it will be easy to share your project with others without breaking it. I’ve spent some time refining this technique and I’ll walk you through the process in this article.

We have covered KiCAD (as well as other) Electronic Design Automation (EDA) tools several times in the past. [Brian Benchoff] did a whole series on building a project from start to finish using all the various EDA packages he could lay his hands on. No CAD or EDA software is perfect, and a user has to learn to get to grips with the idiosyncrasies of whichever program they decide to use. This usually leads to a lot of cussing and hair pulling during the initial stages when one can’t figure out “How the hell do I do that?”, especially from new converts who are used to doing things differently. Read on to learn the best practices to use when using KiCAD and its library management.

KiCAD keeps schematic symbols and component footprints in separate libraries and you need to link a symbol to a footprint using one of several different methods. This puts off a lot of folks, but it works quite well once you get used to it. In the old days before computers, most designers would first quickly draw out a schematic, then create a “bill of materials” where they flesh out the specifications of the components to be used. This would then help them to choose the component footprints, leading to the board layout phase. KiCAD tries to follow this work-flow. Here’s a typical folder structure I use to organize a KiCAD project, having refined this technique over many years of working with the software.

doodad
↳3d_models      // .STEP and .WRL model files for all footprints
↳datasheets     // data sheets for components used
↳gerber         // final production files
↳images         // SVG images and 3D board renders
↳lib_sch        // schematic symbols
↳lib_fp.pretty  // footprints
↳pdf            // schematics, board layouts, dimension drawings

When you draw a schematic using symbols from the built-in libraries bundled with KiCAD, EESCHEMA creates a local backup library — doodad-cache.lib. Once you’re done drawing your schematic, copy this file to the ↳lib_sch folder and rename it to doodad.lib. Next, go to ↳PREFERENCES ↳Component Libraries, select “CURRENT SEARCH PATH LIST” to point to your local project folder ~/doodad/, click the ADD button at the TOP of the pop up window (NOT the Add button in the middle of the pop up), and finally select doodad.lib. It gets added below the currently selected library in the list. KiCAD reads these libraries in sequential order, so you need to select doodad.lib and bring it to the top of the list using the UP button. If there’s a symbol with the same name in different libraries, then the first instance of it gets used. At this point, you can either remove all the other listed libraries, or just ignore them as long as you ensure that every symbol you use gets added to doodad.lib and gets called only from that file.

Your project now uses just one schematic symbol library — ~/doodad/lib_sch/doodad.lib — and any changes to the default schematic libraries will not affect your project. Moreover, using the above defined folder structure, it’s easy to share your project on GitHub.
When your project gets cloned or downloaded, this ensures there are no library conflicts. All of the above may change when KiCAD implements s-expression formats in EESCHEMA and schematic libraries (already implemented in PCBnew), so we’ll revisit this at that time.

There are several ways of doing this, but essentially, you select a schematic symbol, and assign it a footprint from one of the default libraries. Ever since KiCAD moved footprint libraries to GitHub, this has been a cause of heartburn for many. For one, all libraries are hosted online, and KiCAD needs to look them up every time you fire it up. For those who don’t change this behaviour, it slows down the program during startup if you’re not connected to the web. Online libraries are a good idea because footprints stay updated, but it is a surefire way to break your designs should one of the footprints used in your design change. This won’t show up for you immediately, because you will have to explicitly re-read the net-list and get KiCAD to replace changed footprints. But for someone else who clones your design, and whose KiCAD version loads up the updated footprint, it WILL break things.

The solution is to clone all the KiCAD libraries to a local location on your computer, and then point KiCAD to this location. But when you do this, it negates the advantage of having access to updated libraries. I don’t think there’s an ideal way to make it work, but here’s what works for me. I clone the GitHub libraries to my local computer, and keep them updated by regular pulls. This helps me use existing footprints or modify them to suit my requirements. But my project does not use any of those cloned libraries directly. Instead, I generate a project-specific footprint library that contains all of the footprints (~/doodad/lib_fp.pretty) used in the project. Once again, this ensures that when the project gets cloned, all of the right footprints are already available without depending on external source libraries.

Start by cloning (from GitHub) the KiCAD footprint repository to your computer. This is best done using the ‘Footprint Libraries Wizard‘ from within PCBnew. In my case, I have cloned it to ~/kicad_sources/library-repos. In KiCAD’s PCBnew, select ↳PREFERENCES ↳CONFIGURE PATHS and edit KISYSMOD to point to the local footprint library path (~/kicad_sources/library-repos). In some cases, additional steps may be required to make things work. Select ↳PREFERENCES ↳FOOTPRINT LIBRARIES MANAGER and determine the location of the “fp-lib-table” file used by PCBnew. This is a text file that tells PCBnew where to look for footprint libraries – on GitHub, a local path, etc. Open this file in a text editor, and check if it uses KISYSMOD as the path. If not, do a search and replace for all instances of the current path and replace it with KISYSMOD.

Now, you can edit each schematic symbol, and add a footprint to it — either from within EESCHEMA, or using the stand-alone Cvpcb module. Once all footprints have been assigned, make sure you re-generate the netlist before moving on to PCBnew. You can now start PCBnew and read the netlist, which dumps all the footprints in a pile on the canvas. Select the Mode: footprint icon, then context-click on any ONE footprint, select ↳Global Spread and Place ↳Spread out all footprints. This spreads out all the footprints, making it easier to select and move them around.
Once you’re done with your board layout, and all of your footprints are locked in, select FILE ↳Archive Footprints ↳Create Library and Archive Footprints and provide the path/name to the .pretty folder in your project (~/doodad/lib_fp.pretty). This copies all the footprints used in your layout to the target folder. Then, select Preferences ↳Footprint Libraries Wizard ↳Files on my computer, navigate to ~/doodad/lib_fp.pretty, and make sure you select “To the Current Project Only” before hitting Finish.

At this point, you have used footprints from KiCAD’s global libraries and applied them to schematic symbols, made a netlist, imported the netlist in PCBnew, placed the footprints and routed the board, made an archive of all the footprints used, and configured PCBnew to use that archive library. Next, return back to EESCHEMA, and edit the footprint association of each symbol to point to the new lib_fp.pretty folder instead of the local GitHub repository on your computer. The easiest way to do this is to open the .SCH file in a text editor and do a search/replace. In our present example, we will replace instances such as “Capacitors_ThroughHole” or “Resistors_ThroughHole” or “LEDs” with our local project library folder “lib_fp”.

Open the schematic one last time, save a fresh netlist, open PCBnew, read this netlist, but this time select the CHANGE option under Exchange Footprint. Your board layout will now be using footprints saved in your lib_fp.pretty folder, and changes to the KiCAD global libraries will not affect the layout.

This may sound a bit convoluted in the beginning, but over time it becomes quite easy, and you can eliminate some steps as you get better. For example, I already have my own library for most of the common parts that I use, and copy these footprints before starting off on a new project. Over time, as you get better at it, you will start building your own schematic symbols and footprints from component data sheets instead of using external versions. Like I said at the beginning, it’s not perfect, but for me this process works very well. If you have comments or suggestions on making this better, chime in and let us know.
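
For what it's worth, that final .SCH search-and-replace step can also be scripted. The sketch below simply rewrites footprint library prefixes in a schematic file; the library names in LIB_MAP are just the examples from this article, the doodad.sch filename is hypothetical, and you should keep the backup it writes in case something goes wrong:

# Optional helper for the search/replace step above: rewrite footprint library
# prefixes in a KiCAD .sch file so symbols point at the project's lib_fp
# library instead of the global ones. The library names listed here are just
# the examples from the article -- adjust them to whatever your schematic uses.
import shutil

LIB_MAP = {
    "Capacitors_ThroughHole": "lib_fp",
    "Resistors_ThroughHole": "lib_fp",
    "LEDs": "lib_fp",
}

def relink_footprints(sch_path: str) -> None:
    shutil.copy(sch_path, sch_path + ".bak")          # safety copy first
    with open(sch_path, "r", encoding="utf-8") as f:
        text = f.read()
    for old, new in LIB_MAP.items():
        text = text.replace(old + ":", new + ":")     # only touch "LIB:FOOTPRINT" refs
    with open(sch_path, "w", encoding="utf-8") as f:
        f.write(text)

relink_footprints("doodad.sch")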

Fujitsu Liquid Immersion Not All Hot Air When it Comes to Cooling Data Centers

Given the prodigious heat generated by the trillions of transistors switching on and off 24 hours a day in data centers, air conditioning has become a major operating expense. Consequently, engineers have come up with several imaginative ways to ameliorate such costs, which can amount to a third or more of data center operations. One favored method is to set up hot and cold aisles of moving air through a center to achieve maximum cooling efficiency. Meanwhile, Facebook has chosen to set up a data center in Luleå, northern Sweden, on the fringe of the Arctic Circle to take advantage of the natural cold conditions there; and Microsoft engineers have seriously proposed putting server farms under water.

Fujitsu, on the other hand, is preparing to launch a less exotic solution: a liquid immersion cooling system it says will usher in a “next generation of ultra-dense data centers.” Though not the first company to come up with the idea, the Japanese computer giant says it’s used its long experience in the field to come up with a design that accommodates both easy maintenance and standard servers. Maintenance is as straightforward to perform as on air-cooled systems, for it does not require gloves, protective clothing or special training, while cables are readily accessible.

Given that liquids are denser than air, Fujitsu says that immersing servers in its new system’s bath of inert fluid greatly improves the cooling process and eliminates the need for server fans. This, in turn, results in a cooling system consuming 40 percent less power compared to that of data centers relying on traditional air-cooling technology. An added bonus is the fanless operation is virtually silent. “It also reduces the floor space needed by 50 percent,” says Ippei Takami, chief designer in Fujitsu’s Design Strategy Division in Kawasaki near Tokyo. Takami showed off a demonstration system at the company’s annual technology forum held in Tokyo this week.

A cooling bath measures 90 cm x 72 cm x 81 cm (width, depth, height), while the rack it fits into measures 110 cm x 78 cm x 175 cm. The coolant used is an electrically insulating fluorocarbon fluid manufactured by 3M called Fluorinert. A bath has a horizontal 16-rack-unit space. Two baths in their racks can be stacked vertically one on top of the other, and dedicated racks holding eight baths in two rows of four are available. “There is no limitation on the number of stacks that can be used,” says Takami. He also points out that a bath’s dimensions are compatible with regular 48-centimeter rack-width specifications. So any air-cooled rack-mountable servers can be used as long as they meet the depth requirements of the bath and when unnecessary devices like fans are removed.

The scheme employs a closed-bath, single-phase system in which the servers are directly submerged in the dielectric fluid. A lid is used to cover the bath to prevent evaporation. A coolant distribution unit (CDU) incorporates a pump, a heat exchanger, and a monitoring module. The fluid captures the heat generated by the servers’ HDDs, SSDs and other devices, and transfers it via the CDU to the heat exchanger, where it is expelled outside the data center by means of a water loop and cooling tower or chilling unit. The fluid is then pumped back into the bath after filtering. The monitoring system warns maintenance engineers of any abnormal conditions via a network.

Takami says because the fluid protects the servers, the system can be deployed anywhere, no matter how harsh the conditions or environment may be.
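
For a feel of what the CDU's loop has to carry away, a back-of-the-envelope heat balance (Q = m_dot * c_p * deltaT) is enough to size the coolant flow. The numbers below (bath power, temperature rise, fluid properties) are illustrative assumptions, not Fujitsu's figures:

# Back-of-the-envelope sizing of the coolant loop described above: how much
# fluid flow does it take to carry a given IT load at a given temperature rise?
# Uses Q = m_dot * c_p * dT. The bath power, allowed temperature rise, and
# fluid properties are illustrative assumptions, not Fujitsu's figures.
def required_flow_lpm(heat_kw: float, delta_t_k: float,
                      cp_j_per_kg_k: float, density_kg_per_l: float) -> float:
    mass_flow_kg_s = (heat_kw * 1000.0) / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0   # liters per minute

# Assume a 20 kW bath, a 10 K rise across the heat exchanger, and rough
# properties for a fluorocarbon coolant (cp ~1100 J/kg.K, density ~1.8 kg/L):
print(round(required_flow_lpm(20.0, 10.0, 1100.0, 1.8), 1), "L/min")
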
No word on a price tag yet, but Takami reveals some companies are already evaluating the system. Fujitsu expects to ship the product later this year.

3 Ways Ford Cars Could Monitor Your Health

People can form intimate connections to their cars in the course of daily commutes, frustrating traffic jams, and liberating road trips. How would you feel, though, if your car knew detailed information about your insides, such as the regularity of your heartbeat and the amount of glucose in your blood? Ford is experimenting with all sorts of health and wellness features for its cars and SUVs, according to Pim van der Jagt, a Ford R&D director who described some of the company’s experiments last week in a keynote talk at the IEEE Body Sensor Networks Conference. “We’ve seen consumer spending on health and wellness going up strongly,” he later told IEEE Spectrum in an interview. That trend motivated Ford to work on health features for its cars that it hopes will excite customers and give it an edge in the competitive automotive market.

As director of R&D for vehicle technologies at the Ford Research and Innovation Center in Aachen, Germany, van der Jagt oversees experiments in three different types of health features for cars, which he calls the three Bs: built-in, brought-in, and beamed-in services. He gave Spectrum an example of each.

A heart monitor in the driver’s seat: In the built-in category, Ford has developed an electrocardiography (ECG) reader that’s integrated into the driver’s seat. While the ECG systems used in hospitals rely on electrodes attached to the skin, this contactless system records its signals through the driver’s clothes. Although they should be “normal” clothes, van der Jagt added, “not a thick winter jacket or leather coat.” With every heartbeat, an electrical signal ripples through the heart’s muscle cells. Ford’s car seat incorporates six capacitive plates to record that signal by registering the electric charge between the plates and the driver’s body, which changes slightly with every heartbeat. This technology was developed in collaboration with Steffen Leonhardt, a professor at RWTH Aachen University.

Van der Jagt imagines people with heart conditions using their daily commutes as convenient checkups. He notes that heart patients can already use wearable heart monitors and apps for at-home checkups, taking the readings themselves and sending the data to their doctor or a health coach. “Instead of taking 10 minutes to do this at your kitchen table,” he says, “why not do it seamlessly while driving? Then when you get to your office the analysis is waiting in your inbox.” While the Ford engineers have proven that the technology is feasible, van der Jagt says they’re now trying to determine if there’s a market for such a feature. One data point that argues for it: Ford has estimated that one-third of its European customers will be 65 or older by 2050, and older people quite often have heart problems. What’s more, people are staying behind the wheel until increasingly older ages. “We’re considering that people over 100 will drive cars,” van der Jagt says.

Glucose-monitor data in the car’s dashboard display: This potential feature would be brought in via wearables and apps that are connected to the car via Ford Sync, which puts info on the car’s central display screen. Diabetics have to keep a close eye on the glucose level in their blood, which fluctuates with meals and other daily activities. Low glucose can cause an immediate crisis, and the diabetic person can pass out. So diabetics are increasingly using continuous glucose monitoring systems, which typically have sensors under the skin that send data to a base station or smartphone.
Such monitoring systems could just as easily send data to the car. “Rather than picking up your phone to check the numbers that your wearable device generates, you just see it on the screen,” says van der Jagt. “It would be much safer.” In addition to drivers checking their own levels, he also imagines a mom with a diabetic toddler in the back seat who seems to be asleep—or could be unconscious due to a glucose crash. “On the screen she can get the reading,” van der Jagt says. “The child is perfectly fine, and then she can concentrate on the road again.” A full checkup while driving: Beamed-in health features are farthest out in the future, van der Jagt says, but they’ll be possible in next-gen cars that are directly connected to the Internet. That connectivity would make in-car telemedicine a possibility, says van der Jagt: “Instead of visiting your family doctor, if you have a spare hour in the vehicle you can connect with him.” Sensors in the car could remotely record the driver’s vital signs and send that data to the doctor during the consultation. Ford is already equipping test cars with all manner of cameras and sensors to remotely measure vital signs. Body temperature can easily be measured with infrared cameras or in-seat sensors. For respiration and heart rate, van der Jagt notes that low-intensity radar can be used; in these systems, electromagnetic waves directed at the body bounce back from different types of tissues in different ways. As the person’s heart beats and their lungs expand and contract, the radar signal changes. Van der Jagt also mentions that he recently met with the Dutch technology company Philips about the company’s vital signs camera. It measures heart rate by detecting the incredibly subtle change in skin color as blood rushes through the arteries and capillaries of the face, and measures breathing rate by recording the movements of the person’s chest. “We’re talking with suppliers to see if there’s interest in working together,” he says. While all the features that van der Jagt described to Spectrum are still speculative, it’s not too early to consider the proper tunes to play while your car gives you a thorough going-over. The Police’s “Every Breath You Take” can top the playlist.
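The seat ECG and the Philips-style camera both boil down to the same signal-processing problem: pulling a beat frequency out of a noisy, drifting periodic trace. Below is a minimal sketch of one common way to do that with off-the-shelf filtering and peak detection; the sampling rate, filter band, and synthetic test trace are illustrative assumptions, not anything Ford or Philips has disclosed.

```python
# Hedged sketch: estimating heart rate (beats per minute) from a noisy
# periodic trace such as a seat-ECG voltage or the mean green-channel
# intensity of a face video. Not Ford's or Philips' implementation; the
# sampling rate, filter band, and synthetic signal are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(signal, fs, low_hz=0.7, high_hz=3.0):
    """Band-pass to plausible heart frequencies (42-180 bpm), then count peaks."""
    nyq = 0.5 * fs
    b, a = butter(3, [low_hz / nyq, high_hz / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    # Require peaks at least 0.33 s apart (i.e., below 180 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.33 * fs),
                          prominence=filtered.std())
    if len(peaks) < 2:
        return None
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s

if __name__ == "__main__":
    fs = 250                      # samples per second (assumed)
    t = np.arange(0, 30, 1 / fs)  # 30 seconds of data
    true_bpm = 72
    # Synthetic "heartbeat" trace: 72-bpm oscillation plus drift and noise.
    trace = (np.sin(2 * np.pi * (true_bpm / 60) * t)
             + 0.5 * np.sin(2 * np.pi * 0.2 * t)   # breathing-like drift
             + 0.3 * np.random.randn(len(t)))      # sensor noise
    bpm = estimate_heart_rate(trace, fs)
    print("Estimated heart rate:",
          f"{bpm:.1f} bpm" if bpm is not None else "not enough peaks")
```

On real in-cabin data the hard part is not the peak counting but motion artifacts from steering and road vibration, which is presumably where most of Ford's engineering effort would go.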

A Danger to the World: It's Time to Get Rid of Donald Trump


Donald Trump has transformed the United States into a laughing stock and he is a danger to the world. He must be removed from the White House before things get even worse. Donald Trump is not fit to be president of the United States. He does not possess the requisite intellect and does not understand the significance of the office he holds or the tasks associated with it. He doesn't read. He doesn't bother to peruse important files and intelligence reports and knows little about the issues that he has identified as his priorities. His decisions are capricious and they are delivered in the form of tyrannical decrees. He is a man free of morals. As has been demonstrated hundreds of times, he is a liar, a racist and a cheat. I feel ashamed to use these words, as sharp and loud as they are. But if they apply to anyone, they apply to Trump. And one of the media's tasks is to continue telling things as they are: Trump has to be removed from the White House. Quickly. He is a danger to the world. Trump is a miserable politician. He fired the FBI director simply because he could. James Comey had gotten under his skin with his investigation into Trump's confidants. Comey had also refused to swear loyalty and fealty to Trump and to abandon the investigation. He had to go. Witnessing an American Tragedy Trump is also a miserable boss. His people invent excuses for him and lie on his behalf because they have to, but then Trump wakes up and posts tweets that contradict what they have said. He doesn't care that his spokesman, his secretary of state and his national security adviser had just denied that the president had handed Russia (of all countries) sensitive intelligence gleaned from Israel (of all countries). Trump tweeted: Yes, yes, I did, because I can. I'm president after all. Nothing is as it should be in this White House. Everyone working there has been compromised multiple times and now they all despise each other - and everyone except for Trump despises Trump. Because of all that, after just 120 days of the Trump administration, we are witness to an American tragedy for which there are five theoretical solutions. The first is Trump's resignation, which won't happen. The second is that Republicans in the House and Senate support impeachment, which would be justified by the president's proven obstruction of justice, but won't happen because of the Republicans' thirst for power, which they won't willingly give up. The third possible solution is the invocation of the 25th Amendment, which would require the cabinet to declare Trump unfit to discharge the powers of the presidency. That isn't particularly likely either. Fourth: The Democrats get ready to fight and win back majorities in the House and Senate in midterm elections, which are 18 months away, before they then pursue option two, impeachment. Fifth: the international community wakes up and finds a way to circumvent the White House and free itself of its dependence on the U.S. Unlike the preceding four options, the fifth doesn't directly solve the Trump problem, but it is nevertheless necessary - and possible. No Goals and No Strategy Not quite two weeks ago, a number of experts and politicians focused on foreign policy met in Washington at the invitation of the Munich Security Conference.
It wasn't difficult to sense the atmosphere of chaos and agony that has descended upon the city. The article you are reading originally appeared in German in issue 21/2017 (May 20, 2017) of DER SPIEGEL. The U.S. elected a laughing stock to the presidency and has now made itself dependent on a joke of a man. The country is, as David Brooks wrote recently in the New York Times, dependent on a child. The Trump administration has no foreign policy because Trump has consistently promised American withdrawal while invoking America's strength. He has promised both no wars and more wars. He makes decisions according to his mood, with no strategic coherence or tactical logic. Moscow and Beijing are laughing at America. Elsewhere, people are worried. In the Pacific, warships - American and Chinese - circle each other in close proximity. The conflict with North Korea is escalating. Who can be certain that Donald Trump won't risk nuclear war simply to save his own skin? Efforts to stop climate change are in trouble and many expect the U.S. to withdraw from the Paris Agreement because Trump is wary of legally binding measures. Crises, including those in Syria and Libya, are escalating, but no longer being discussed. And who should they be discussed with? Phone calls and emails to the U.S. State Department go unanswered. Nothing is regulated, nothing is stable and the trans-Atlantic relationship hardly exists anymore. German Foreign Minister Sigmar Gabriel and Bundestag Foreign Affairs Committee Chair Norbert Röttgen fly back and forth, but Germany and the U.S. no longer understand each other. Hardly any real communication takes place, there are no joint foreign policy goals and there is no strategy. In "Game of Thrones," the Mad King was murdered (and the child that later took his place was no better). In real life, an immature boy sits on the throne of the most important country in the world. He could, at any time, issue a catastrophic order that would immediately be carried out. That is why the parents cannot afford to take their eyes off him even for a second. They cannot succumb to exhaustion because he is so taxing. They ultimately have to send him to his room - and return power to the grownups.

World's Thinnest Hologram Promises Holograms on our Mobile Phones


Holograms have fascinated onlookers for over half a century. But the devices for producing these holographic images have been relatively bulky contraptions, forced into their large size in part by the wavelengths of light that are necessary to generate them. Emerging technologies such as plasmonics and metamaterials have offered a way to manipulate light in such a way that these wavelengths can be shrunk down. This makes it possible to use light for devices such as integrated photonic circuits. And just this week, we’ve seen metasurfaces enable an elastic hologram that can switch images when stretched. Now, a team of researchers at RMIT University in Melbourne, Australia, and the Beijing Institute of Technology has developed what is being described as the “world’s thinnest hologram.” It is only 60 nanometers thick; they produced it not by using either plasmonics or metamaterials, but with topological insulators. The resulting technology could enable future devices capable of producing holograms that can be seen by the naked eye, and are small enough to be integrated into our mobile devices. Simply put, topological insulators are materials that behave like conductors near their surfaces but act as insulators throughout the bulk of their interiors. The question is, how do these materials enable shrinking the wavelength of light so that a device for producing holograms can potentially be embedded into our mobile devices? In an e-mail interview with IEEE Spectrum, Zengji Yue, a research fellow at RMIT University and co-author of the research paper published in Nature Communications, explained that the metallic surface’s low refractive index and the insulating bulk’s high refractive index together act as an intrinsic optical cavity, generating multiple reflections of light inside the thin film. This enhances the light phase shift, which is when the peaks and valleys of identical light waves don’t quite match up. This enhanced phase shift creates the holographic images. Just as a quick primer, holography essentially operates based on the principle of interference. A hologram is the product of the interference between two or more beams of laser light. So in a typical holographic device, a reference beam is focused directly on a recording medium, while an object beam is reflected off the object and then meets the reference beam on the recording medium, creating an interference pattern. In the device produced by the international team of researchers, a light source shines on the material, and the output light from the material and from the substrate has a phase difference. The phase contains the information on the contours of the original object. Human eyes and a CCD camera can capture the information and images. “Integrating holography into everyday electronics would make screen size irrelevant—a pop-up 3D hologram can display a wealth of data that doesn’t neatly fit on a phone or watch,” said Min Gu, a professor at RMIT and co-author of the research, in a press release. “From medical diagnostics to education, data storage, defense and cyber security, 3D holography has the potential to transform a range of industries and this research brings that revolution one critical step closer.” In a video below you can see the potential for such a technology. The RMIT researchers have been highlighting the idea that the material is relatively easy to make and scalable. The fabrication method is direct laser writing, a 3D printing technique, according to Yue.
“A femtosecond laser manages to ablate the thin film material on a substrate quickly, producing centimeter-scale holograms for practical applications,” he added. While all of this may conjure up sci-fi images of holograms popping up from our mobile phones, some significant engineering challenges remain. Yue acknowledges that a way to build the light source into a smartphone still needs to be developed. In addition, a suitable film coating for the mobile device has to be engineered and realized. The immediate challenge for Yue and his colleagues is to find a way to improve the efficiency and quality of the device they have. In the future, Yue says, they will start looking at flexible holograms for wider applications.
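Since the article leans on the interference principle, here is a minimal numerical sketch of it: a plane reference wave and a spherical object wave are summed on a recording plane, and the recorded intensity is the interference pattern a hologram stores. The wavelength, geometry, and grid size are arbitrary illustration choices, not parameters of the RMIT device.

```python
# Minimal sketch of holographic interference: a plane reference wave and a
# spherical object wave are superposed on a recording plane; the recorded
# intensity |R + O|^2 encodes the phase (i.e., shape) of the object.
# Wavelength, distances, and grid size are illustrative, not the RMIT device's.
import numpy as np

wavelength = 633e-9                 # assumed red laser, in meters
k = 2 * np.pi / wavelength          # wavenumber
n = 512                             # grid resolution
extent = 2e-3                       # 2 mm x 2 mm recording plane
x = np.linspace(-extent / 2, extent / 2, n)
X, Y = np.meshgrid(x, x)

# Reference beam: plane wave hitting the plate at a small angle.
theta = np.deg2rad(1.0)
reference = np.exp(1j * k * np.sin(theta) * X)

# Object beam: spherical wave from a point "object" 10 cm behind the plate.
z = 0.10
r = np.sqrt(X**2 + Y**2 + z**2)
object_wave = np.exp(1j * k * r) / r
object_wave /= np.abs(object_wave).max()   # normalize amplitude for contrast

# The hologram is the recorded interference intensity.
hologram = np.abs(reference + object_wave) ** 2
print("hologram shape:", hologram.shape,
      "intensity range:", hologram.min(), hologram.max())
```

Illuminating such a recorded pattern with the reference beam alone reconstructs the object wave, which is the basic trick the RMIT film performs in a 60-nanometer-thick layer.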

Video Friday: Animatronic King Kong, Robot Pilot, and Giant Eyeball Drone


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!): NASA Robotic Mining Competition – May 22-26, 2017 – NASA KSC, Fla., USA ROS-I Asia Pacific Workshop – May 25-26, 2017 – Singapore IEEE ICRA – May 29-June 3, 2017 – Singapore University Rover Challenge – June 1-13, 2017 – Hanksville, Utah, USA IEEE World Haptics – June 6-9, 2017 – Munich, Germany NASA SRC Virtual Competition – June 12-16, 2017 – Online ICCV 2017 – June 13-16, 2017 – Venice, Italy RoboBoat 2017 – June 20-20, 2017 – Daytona Beach, Fl., USA Aerial Robotics International Research Symposium – June 21-22, 2017 – Toronto, Ont., Canada Hamlyn Symposium on Medical Robotics – June 25-28, 2017 – London, England Autonomous Systems World – June 26-27, 2017 – Berlin, Germany RoboUniverse Seoul – June 28-30, 2017 – Seoul, Korea RobotCraft 2017 – July 3-3, 2017 – Coimbra, Portugal ICAR 2017 – July 10-12, 2017 – Hong Kong RSS 2017 – July 12-16, 2017 – Cambridge, Mass., USA MARSS – July 17-21, 2017 – Montreal, Canada Summer School on Soft Manipulation – July 17-21, 2017 – Lake Chiemsee, Germany Let us know if you have suggestions for next week, and enjoy today’s videos. As part of DARPA’s ALIAS program, this robot arm was able to help land a Boeing 737 in a simulator: The only reason this works at all is because of how heavily automated the aircraft already is. It makes me wonder what the point of the robot arm is at all: Why not just build this stuff into the existing autopilot already, you know? [ Aurora Flight Sciences ] Earlier this year, WeRobotics and Nepal Flying Labs teamed up with the Swiss-based NGO Medair to map one of the largest landslides in Nepal. This video showcases that experience, highlighting the project from the team’s point of view, and attempting to show the true magnitude of the landslide. The video also addresses one of the core reasons for the establishment of our Flying Labs, the creation of local capacity in robotics technologies for faster disaster response, and other social good use cases. [ WeRobotics ] King Kong is coming to Broadway. And here he is, in the animatronic faux-flesh: And here’s what it will look like on stage: The giant animatronic Kong—weighing in at 1.1 metric tons and standing 6 meters tall—was built by Global Creatures in Melbourne, Australia, where the musical originally premiered in 2013. The production is now expected to open in the fall of 2018 at the Broadway Theatre. [ The New York Times ] There’s still plenty of work for nursery and greenhouse employees, but Harvest Automation helps to handle the really unpleasant repetitive heavy lifting: The little robots certainly are brisk, almost too brisk at times: [ Harvest Automation ] Jimmy Fallon demos amazing new robots from all over the world, including an eerily human robot named Sophia from Hanson Robotics that plays rock-paper-scissors. Oof. That poor butterfly. [ Tonight Show ] The world’s most poke-able giant flying eyeball explores... uh... something French: [ Aerotain ] via [ Something French ] Things that work in simulation but not on real robots: dynamic stabilization while tracking an object.
[ Vikash Kumar ] Coder Coded’s Pepper simulator and remote controller for Unity have just been updated: [ Coder Coded ] From Justin Thomas, in the GRASP Lab at UPenn: In this work, we address the autonomous flight of a small quadrotor, enabling tracking of a moving object. The 15 cm diameter, 250 g robot relies only on onboard sensors (a single camera and an inertial measurement unit) and computers, and can detect, localize, and track moving objects. Our key contributions include the relative pose estimate of a spherical target as well as the planning algorithm, which considers the dynamics of the underactuated robot, the actuator limitations, and the field of view constraints. We show simulation and experimental results to demonstrate feasibility and performance, as well as robustness to abrupt variations in target motion. [ Justin Thomas ] Shriya Bhatnagar, Austin Dodge, and Michael Green’s final project for “Introduction to Robotics” at the University of Houston is a robot with a taste for the artistic. Salvador WALL•E is a robot arm that can paint a landscape repeatedly. In particular, it uses acrylic paints and a paintbrush to reproduce a city silhouette of the Houston skyline. The arm is the OWI-535 robotic arm used in demo projects for ECE 5397. The arm has been modified with potentiometers and an Arduino Mega 2560 to allow for more accurate control. [ University of Houston ] Sugary beverage company puts large light on drone to make dangerous nighttime mountain bike riding perhaps slightly less dangerous: Here’s the behind the scenes, where they describe the massive water-cooled custom LED light system they mounted on the drone: [ Night Chase ] via [ Fstoppers ] Kuri can now locate its own charging dock and recharge itself when necessary: Mayfield has also been doing some field testing, and to get an unpredictable variety of houses, they just use Airbnb: [ Mayfield Robotics ] From Tobias Nägeli at ETH Zurich, appearing in IEEE Robotics & Automation Letters: We propose a method for real-time motion planning with applications in aerial videography. Taking framing objectives, such as position of targets in the image plane as input, our method solves for robot trajectories and gimbal controls automatically and adapts plans in real-time due to changes in the environment. We contribute a real-time receding horizon planner that autonomously records scenes with moving targets, while optimizing for visibility to targets and ensuring collision-free trajectories. A modular cost function, based on the re-projection error of targets is proposed that allows for flexibility and artistic freedom and is well behaved under numerical optimization. We formulate the minimization problem under constraints as a finite horizon optimal control problem that fulfills aesthetic objectives, adheres to non-linear model constraints of the filming robot and collision constraints with static and dynamic obstacles and can be solved in real-time. We demonstrate the robustness and efficiency of the method with a number of challenging shots filmed in dynamic environments including those with moving obstacles and shots with multiple targets to be filmed simultaneously. (A toy sketch of this kind of receding-horizon cost appears at the end of this post.) [ MIT News ] We did it! We have accomplished the World’s first human flight with the drone and jump at high altitude. On May 12, our 28-propeller Aerones’s drone has lifted a skydiver Ingus Augstkalns at a height of 330 metres, from where he accomplished the planned jump and landing with the parachute. Er, congrats?
Personally, I’m kinda ready for the “let’s do this otherwise normal thing except WITH A DRONE” fad to be over already. [ Aerones ] via [ Engadget ] RoboCup is celebrating its 20th anniversary this year, and they’ve put together a series of videos highlighting each of the competitions that are part of the event. The first three are right here: And you’ll find the others on YouTube at the link below. [ YouTube ] via [ RoboCup ]
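For readers curious what the ETH Zurich receding-horizon videography planner mentioned above amounts to in practice, here is a toy sketch of the idea: over a short horizon, pick control inputs that keep the target well framed while penalizing control effort, then re-plan as the target moves. This is a 2-D point-mass simplification with made-up weights and a stand-in framing error; the real system handles full quadrotor and gimbal dynamics, visibility, and collision constraints.

```python
# Toy receding-horizon sketch (not the ETH Zurich implementation): choose a
# short sequence of drone accelerations that keeps a moving target near the
# desired framing while penalizing control effort. 2-D point mass, made-up
# weights; the framing term is a stand-in for the paper's re-projection error.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.2, 10                      # time step [s] and horizon length

def rollout(accels, pos, vel):
    """Integrate a point-mass drone forward under candidate accelerations."""
    traj = []
    for a in accels.reshape(H, 2):
        vel = vel + a * DT
        pos = pos + vel * DT
        traj.append(pos)
    return np.array(traj)

def cost(accels, pos, vel, target_traj, w_frame=1.0, w_u=0.05):
    traj = rollout(accels, pos, vel)
    # Framing stand-in: squared offset between drone and predicted target.
    frame_err = np.sum((traj - target_traj) ** 2)
    effort = np.sum(accels ** 2)
    return w_frame * frame_err + w_u * effort

# Target moves in a straight line; drone starts offset and at rest.
target_traj = np.array([[0.5 * k * DT, 1.0] for k in range(1, H + 1)])
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])

res = minimize(cost, np.zeros(2 * H), args=(pos, vel, target_traj),
               method="L-BFGS-B")
plan = rollout(res.x, pos, vel)
print("first planned waypoint:", plan[0], "final:", plan[-1])
```

In a real receding-horizon loop only the first control of each plan is executed before the whole optimization is rerun with fresh target estimates, which is what lets the planner react to abrupt changes in target motion.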

22-Year-Old Lidar Whiz Claims Breakthrough


Lidarland is buzzing with cheap, solid-state devices that are supposedly going to shoulder aside the buckets you see revolving atop today’s experimental driverless cars. Quanergy started this solid-state patter, a score of other startups continued it, and now Velodyne, the inventor of those rooftop towers, is talking the talk, too. Not Luminar. This company, which emerged from stealth mode earlier this month, is fielding a 5-kilogram box with a window through which you can make out not microscopic MEMS mirrors, but two honking, macroscopic mirrors, each as big as an eye. Their movement—part of a secret-sauce optical arrangement—steers a single pencil of laser light around a scene so that a single photodetector can measure the distance to every detail. “There’s nothing wrong with moving parts,” says Luminar founder and CEO Austin Russell. “There are a lot of moving parts in a car, and they last for 100,000 miles or more.” A key difference between Luminar and all the others is its reliance on home-made stuff rather than industry-standard parts. Most important is its use of indium gallium arsenide for the laser diode and for the photodetector. This compound semiconductor is harder to manufacture and thus more expensive than silicon, but it emits at a wavelength of 1550 nanometers, deep in the infrared part of the spectrum. That makes it much safer for human eyes than today’s standard wavelength, 905 nm. Luminar can thus pump out a beam with 40 times the power of rival sensors, increasing its resolution, particularly at 200 meters and beyond. That’s how far cars will have to see at highway speeds if they want to give themselves more than half a second to react to events. Russell’s a little unusual for a techie. He stands a head taller than anyone at IEEE Spectrum’s offices and, at 22, he is a true wunderkind. He dropped out of Stanford five years ago to take one of Peter Thiel’s anti-scholarships for budding businessmen; since then he’s raised US $36 million and hired 160 people, some of them in Palo Alto, the rest in Orlando, Fla. Like every lidar salesman, he comes equipped with a laptop showing videos taken by his system, contrasted with others from unnamed competitors. But this comparison’s different, he says, because it shows you exactly what the lidar sees, before anyone’s had the chance to process it into a pretty picture. And, judging by the video he shows us, it is far more detailed than another scene, which he says comes from a Velodyne Puck, a hockey-puck-shaped lidar that sells for US $8,000. The Luminar system shows cars and pedestrians well in front of the lidar. The Puck vision—unimproved by advanced processing, that is—is much less detailed. “No other company has released actual raw data from their sensor,” he says. “Frankly, there are a lot of slideshows in the space, not actual hardware.” We take the gizmo into our little photo lab here and our photo editor snaps a few, always taking care not to look too deeply into the window shielding the lidar’s mirrored eyes. Trade secrets, all very hush-hush. One thing’s for sure: The elaborate optics don’t look cheap, and Russell admits the system isn’t, not yet. These machines are part of a total production run of just 100 units, enough for samples to auto companies, four of which, he says, are working with it as of now. He won’t say who, but Bloomberg quotes other sources as saying that BMW and General Motors “have dropped by” the California office. The cost per unit will fall as production volume rises, he says.
But he isn’t talking about $100 lidar anytime soon. “Cost is not the most important issue; performance is,” he contends. “Getting an autonomous vehicle to work 99 percent of the time is not necessarily a difficult task, it’s the last percent, with all the possible ‘edge cases’ a driver can be presented with—a kid running out into the road, a tire rolling in front of the car.” Tall though he is, Russell is himself a prime example of a hard-to-see target because his jeans are dark and his shirt is black. “Current lidar systems can’t see a black tire, or a person like me wearing black—Velodyne wouldn’t see a 10-percent reflective object. It’s the dark car problem—no one else talks about it!” And, because the laser’s long wavelength bounces more easily off a dark-colored object than the shorter wavelengths of today’s lidars, a dark object at distance will be that much easier to detect. What’s more, only one laser and one photodetector are needed, not an array of 16 or even 64—the count for Velodyne’s top-of-the-line, $50,000-plus rooftop model. As a result, the system knows the source for each returning photon and can thus avoid interference problems with oncoming lidars and with the sun itself. But, because it hasn’t got the 360-degree coverage of a rooftop tower, you’d need four units, one for each corner of the car. That, together with the need to make the laser, photodetector, and image processor at its own fab, in Florida, means the price will be high for a while. Russell’s argument is simple: good things cost money. “The vast majority of companies in this space are integrating off-the-shelf components,” he says. “The same lasers, same receivers, same processors—and that’s why there have been no advances in lidar performance in a decade. Every couple of years a company says, ‘we have a new lidar sensor, half the size, half the price, and oh, by the way, half the performance.’ The performance of the most expensive ones has stayed the same for practically a decade; all the newer ones are orders of magnitude worse.”
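At bottom, any pulsed lidar, Luminar's included, ranges by timing photon round trips: distance equals the speed of light times the round-trip time, divided by two. The short sketch below works through that arithmetic and two numbers it implies, the timing precision needed for centimeter-level resolution and the maximum unambiguous range for a given pulse rate. The pulse-repetition rate is an assumption for illustration, not a published Luminar specification.

```python
# Back-of-the-envelope time-of-flight arithmetic for a pulsed lidar.
# distance = c * t / 2 (the pulse travels out and back). The pulse-repetition
# rate below is an illustrative assumption, not a Luminar figure.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m: float) -> float:
    """Seconds between firing a pulse and seeing its echo from distance_m."""
    return 2.0 * distance_m / C

def max_unambiguous_range(pulse_rate_hz: float) -> float:
    """Farthest target whose echo returns before the next pulse is fired."""
    return C / (2.0 * pulse_rate_hz)

if __name__ == "__main__":
    print(f"echo from 200 m arrives after {round_trip_time(200.0) * 1e9:.0f} ns")
    # 1 cm of range resolution corresponds to roughly 67 ps of timing resolution.
    print(f"1 cm resolution needs ~{round_trip_time(0.01) * 1e12:.0f} ps timing")
    # Assumed 500 kHz pulse rate for a single-beam scanner.
    print(f"max unambiguous range at 500 kHz: {max_unambiguous_range(5e5):.0f} m")
```

The point of the exercise: seeing a dark target at 200 meters is less about the ranging math, which is trivial, and more about getting enough photons back, which is where the 1550 nm wavelength and the higher permissible beam power come in.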

Hiren Mowji, Savioke at the Robot Block Party hosted by Jabil at Blue Sky


Published on Apr 16, 2017.

In the AI wars, Microsoft now has the clearer vision


A week ago, Microsoft held its Build developer conference in its backyard in Seattle. This week, Google did the same in an amphitheater right next to its Mountain View campus. While Microsoft’s event felt like it embodied the resurgence of the company under the leadership of Satya Nadella, Google I/O — and especially its various, somewhat scattershot keynotes — fell flat this year. The two companies have long been rivals, of course, but now — maybe more than ever — they are on a collision course that has them compete in cloud computing, machine learning and artificial intelligence, productivity applications and virtual and augmented reality. It’s fascinating to compare Pichai’s and Nadella’s keynote segments. Both opened their respective shows. But while Pichai used his time mostly to announce new stats and a new product or two, Nadella instead used his time on stage to talk about the opportunities and risks of the inevitable march of technological progress that went way beyond saying that his company is now ‘AI first.’ “Let us use technology to bring more empowerment to more people,” Nadella said of one of the core principles of what he wants his company to focus on. “When we have these amazing advances in computer vision, or speech, or text understanding — let us use that to bring more people to use technology and to participate economically in our society.” And while Google mostly celebrated itself during its main I/O keynote, Nadella spent a good chunk of time during his segment on celebrating and empowering developers in a way that felt very genuine. Having spent a few days at both events, I couldn’t help coming home thinking that it may be Microsoft that has the more complete vision for this AI-first world we’ll soon live in — and if Google has it, it didn’t do a good job articulating it at I/O this year. The area where this rivalry is most obvious (outside of the core cloud computing services) is in machine learning. Google CEO Sundar Pichai noted during his keynote segment that the company is moving from being a mobile first company to an AI first one. Microsoft is essentially on the same path, even as its CEO Satya Nadella phrased it differently. Neither company really mentioned the other during its keynote events, but the parallels here are pretty clear. The two marquee products both companies used to show off their AI prowess were surprisingly similar. For Microsoft, that was Story Remix, a very nifty app that automatically makes interesting home videos out of your photos and videos. For Google, it was Google Photos, which is using its machine learning tech to help you share your best photos more easily. Remix is a far more fun and interesting product, which garnered massive applause from the developer audience at Build, while the new Google Photos features sound useful enough, but aren’t going to blow people away. There was also nothing developers could learn from that segment. Google Lens, which can identify useful information in images, looks like it could be really useful, too (though we won’t really know until we get our hands on it at some point in the future). But it’s worth noting that Google’s presentation wasn’t very clear here: a number of people I talked to after the event told me that they had a hard time figuring out whether this was a developer tool, a built-in feature for the Google Assistant or a standalone app. That’s never a good sign.
Google also still offers Google Goggles, an app that has let you identify objects around you for a few years now. I think Google forgot that even existed, as it’s sometimes prone to do. At the core of the two companies’ AI efforts for consumers are Microsoft Cortana and the Google Assistant. This is one area where Google remains clearly ahead of Microsoft, simply because it offers more hardware surfaces for accessing it and because it knows more about the user (and the rest of the world). Cortana works well enough, but because it mostly lives on the desktop and isn’t really connected to the rest of your devices, using it never comes naturally. In the virtual personal assistant arena, Google actually had some interesting announcements (though things like making calls on Google Home fell a bit flat, too, simply because Amazon announced this same feature for its Echo speakers a few days earlier). The fact that it is coming to the iPhone shows that Google wants it to be a cross-platform service and its integration with Chromecast is also really interesting (but again, because Amazon already announced its version of the Echo with a built-in screen, this didn’t land with the big splash Google had surely hoped for either). None of the new Assistant features are available now, which is disappointing and follows an unfortunate trend for Google I/O announcements in recent years. With the Microsoft Graph, it’s worth mentioning, Microsoft is now building a fabric that will tie all of your devices and applications together. Whether that will work as planned remains to be seen, but it’s a bold project that could have wide-reaching consequences for how you use Microsoft’s tools, even on Android, in the future. Another topic both companies talked about at their events is virtual and augmented reality. Here, both Google and Microsoft are talking about the spectrum of experiences that sits between full augmented reality (or “mixed reality” as Microsoft calls it) and virtual reality (and at the other end of that is “real reality” in Google’s charts). With HoloLens, Microsoft has a clear lead in standalone augmented reality experiences. That’s a $3,000 device, though. Google’s current approach is different in that it wants to use machine vision (combined with its Tango technology) to make phones the prime lens for viewing AR experiences. As for VR, Google this year talked a lot about standalone headsets. Yet while it revealed a few partners, it remained vague about specs, prices and release dates. Microsoft, on the other hand, is currently focusing on tethered headsets from partners like Acer that combine some of its HoloLens technology for tracking your movements with the power of the connected Windows 10 PC. Microsoft is shipping dev kits now and consumers will be able to buy them later this year. Its HoloLens, though, is technically miles ahead of anything Google even showed blueprints of at I/O. That’s a surprise, because Google had a lead in building a VR ecosystem thanks to its quirky Cardboard viewers, but now it feels like it’s at risk of falling behind. Both Microsoft and Google used their events to announce relatively evolutionary updates to their flagship operating systems. Google, of course, had already pre-announced Android O and Microsoft had already pre-announced that it’ll now offer two Windows 10 releases a year, so the fact that we’ll get a new update in the fall really wasn’t a surprise. For both companies, these developer shows are high-stakes events.
Google I/O, however, felt pretty relaxed this year. Indeed, it almost felt as if I/O came at the wrong time of the year for the company. There simply wasn’t all that much to announce this year, it felt, and while that would’ve allowed Google to more clearly lay out its vision, it instead squandered valuable keynote time on talking about previously announced YouTube features that few people in the audience cared about. While Microsoft admittedly has a far wider product portfolio for developers, its event had far more energy and showed a clearer vision. Microsoft, too, made sure that its event focused almost exclusively on developers (“There will be coding on stage,” a Microsoft representative warned the assembled media before the first keynote). Google’s event (and especially the main keynote) often felt like the company didn’t quite know who its audience was (developers? consumers? the press?). And Google had a “developer keynote” at its developer conference. That must have been a first. When Microsoft showed off Remix at Build, it was to tell developers that they, too, can take the company’s tools and build an experience like this. When Google showed off Google Photos, it showed consumers that they can now use its technology to quickly make photo books. Yet really interesting new features for developers, like Instant Apps, were barely mentioned in the keynote, even though they touch both consumers and developers. And the fact that the addition of Kotlin as a first-class language for Android development got more applause than any other announcement at the show clearly shows what the core audience is (in the press boxes, that announcement mostly resulted in blank stares, of course). So to be blunt, I/O was relatively boring this year. There was no new hardware, no major new developer tools, no big new consumer product, very few new tools and almost no products that developers or consumers can use right now (and not even a full name for Android O). Maybe this kind of annual cadence for developer conferences simply doesn’t work anymore now that technology moves way too fast for annual updates, but it also remains the most effective tool to bring a developer ecosystem together under one roof (or tents, in Google’s case), state your case and lay out your vision. This year, Microsoft did a better job at that.