Channel: 100% Solutions: robotics

RooBee One is an open-source SLA/DLP 3D printer

Aldric Negrier, a Portuguese Maker and owner of RepRap Algarve, has created an SLA 3D printer named RooBee One. Most desktop 3D printers that you’ll see in Makerspaces or advertised for home use deposit material onto a bed using a hot extrusion head. The open-source RooBee One, however, uses a DLP projector along with an Arduino Mega to light up each layer in a vat of resin, solidifying the part one layer at a time until a complete object emerges. You can see this process at around 0:30 in the video below. RooBee One features an aluminum frame with an adjustable print area of 80 x 60 x 200 mm and a build volume of up to 150 x 105 x 200 mm. Aside from the Arduino, the electronics consist of a RAMPS 1.4 shield, a NEMA 17 stepper motor, a microstepping driver, an endstop, and a 12V power supply. Negrier also installed a fan on top of the printer to help guide the toxic resin vapors away from the machine’s operator. This process may be unfamiliar to those used to “normal” 3D printers, as it “magically” pulls a complete part out of a bath. The project is fairly involved, but the resulting ruby-red machine looks quite impressive. You can find out how to build one on its Instructables page.
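The control loop behind this kind of DLP resin printer is conceptually simple: cure a layer by lighting the projector, then move the build plate by one layer height. The following Arduino-style sketch is only a rough illustration of that cycle under assumed wiring (RAMPS 1.4 Z-axis pins, a projector trigger on pin 9) and made-up timings; it is not the actual RooBee One firmware.

// Hypothetical sketch of one DLP-SLA print cycle: expose a layer, then lift
// the build plate with a stepper on a RAMPS 1.4. Pin numbers, step counts,
// and exposure times are illustrative guesses, not RooBee One's real values.
const int STEP_PIN = 46;       // RAMPS 1.4 Z-axis step pin (assumed)
const int DIR_PIN = 48;        // RAMPS 1.4 Z-axis direction pin (assumed)
const int ENABLE_PIN = 62;     // RAMPS 1.4 Z-axis enable pin (assumed, active LOW)
const int PROJECTOR_PIN = 9;   // relay or trigger for the DLP projector (assumed)

void stepZ(int steps, bool up) {
  digitalWrite(DIR_PIN, up ? HIGH : LOW);
  for (int i = 0; i < steps; i++) {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(500);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(500);
  }
}

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  pinMode(ENABLE_PIN, OUTPUT);
  pinMode(PROJECTOR_PIN, OUTPUT);
  digitalWrite(ENABLE_PIN, LOW);      // enable the stepper driver
}

void loop() {
  digitalWrite(PROJECTOR_PIN, HIGH);  // projector lights up the current layer
  delay(8000);                        // cure time per layer (illustrative)
  digitalWrite(PROJECTOR_PIN, LOW);
  stepZ(400, true);                   // lift to peel the cured layer
  stepZ(380, false);                  // lower back, leaving one layer height
}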

Google Tasks Robots with Learning Skills from One Another via Cloud Robotics

Robots are using shared data to learn skills much faster. Humans use language to tap into the knowledge of others and learn skills faster; this helps us hone our intuition and go through our daily activities more efficiently. Inspired by this, Google Research, DeepMind (its UK artificial intelligence lab), and Google X have decided to let their robots share their experiences. By sharing the learning process among multiple robots, the research team has considerably accelerated general-purpose skill acquisition. Using an artificial neural network, we can teach a robot to achieve a goal by analyzing the results of its previous experiences. At first, the robot may seem to act randomly, simply working by trial and error. However, it examines the result of each trial and, if the outcome is satisfactory, focuses on similar actions in subsequent trials. By connecting each experience to the result it produced, the robot gradually learns to make better choices. Teaching a robot this way requires gathering a great wealth of experience, which is a time-consuming process. For example, in order to teach a robotic arm how to grasp objects, we may need to let the robot experience as many as 800,000 grasps. And that would be just the beginning of its learning process. Although this kind of learning is time-consuming, it has interesting outcomes. Robots designed to perform certain pre-defined actions or interact with pre-defined objects cannot easily respond to changes in the environment. A robot that goes through a training process, however, develops capabilities that depend on the wealth of its experience, and it gains the ability to adapt to slight variations in its environment. To train the robots more rapidly, Google researchers decided to let the robots share their experiences, a concept also known as cloud robotics. Each robot uploads its own experience to a server and downloads the latest version of the training model, which aggregates the results obtained by all of the robots. Effectively, the robots are teaching each other how to perform a certain task. This cloud-based approach significantly reduces the time required to train the network. In an attempt to teach robotic arms to grasp objects, Google observed that the robots developed pre-grasp behaviors: they could push objects away to isolate a certain object from a group and then grasp it. Moreover, the robots learned to treat soft and hard objects differently. The robots acquired these capabilities only through learning, not through programming done before they interacted with objects. In this experiment, which was conducted in March, Google allowed 14 camera-equipped robots to try picking up objects. The robots’ experiences were monitored via the cameras, and the results were used to train the system, which was based on a convolutional neural network (CNN), a type of deep learning model. By sharing the data, the robots were able to learn much faster. Each robot was experimenting under slightly different conditions; for example, the research team slightly changed parameters such as camera position, lighting, and the gripper hardware for each robot. These intentional variations allowed the robots to find a more robust solution and adapt to changes in the environment. However, the system was still unlikely to operate successfully with significantly different hardware or in a significantly different environment.
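To make the sharing mechanism concrete, here is a toy sketch of the upload-and-download loop described above: each robot pushes its latest locally improved model to a shared server and pulls back the pooled version before its next trials. Everything here, from the blending rule to the weight vector standing in for a "training model," is an illustrative assumption rather than Google's actual system.

// Toy illustration of the cloud-robotics loop: robots push local updates
// and pull the pooled model. The "model" is just a weight vector blended
// by averaging; real systems would exchange neural-network parameters.
#include <iostream>
#include <mutex>
#include <vector>

struct SharedModel {
  std::vector<double> weights;
  std::mutex m;
  explicit SharedModel(size_t n) : weights(n, 0.0) {}

  // A robot pushes its locally improved weights; the server blends them in.
  void push(const std::vector<double>& local) {
    std::lock_guard<std::mutex> lock(m);
    for (size_t i = 0; i < weights.size(); i++)
      weights[i] = 0.9 * weights[i] + 0.1 * local[i];
  }

  // Every robot starts its next trials from the latest pooled model.
  std::vector<double> pull() {
    std::lock_guard<std::mutex> lock(m);
    return weights;
  }
};

int main() {
  SharedModel server(4);
  // Two robots with slightly different experiences contribute updates.
  server.push({0.2, 0.1, 0.0, 0.4});
  server.push({0.1, 0.3, 0.2, 0.0});
  for (double w : server.pull()) std::cout << w << " ";
  std::cout << "\n";
}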
In another experiment conducted recently, the Google team gave a group of robots the task of opening a door and investigated the idea of data sharing. They repeated the experiment under three different conditions. In the first, the robots simply relied on reinforcement learning, or trial and error, combined with deep neural networks. The researchers applied interference to make the neural networks build up data more quickly. The server monitored the result of each trial and helped the robots arrive at a better solution. It took the robotic arms about 20 minutes to open the door for the first time; within three hours, however, they figured out how to neatly reach for the handle, turn it, and then pull to open the door. Although the robots successfully opened the door, they were not necessarily building an explicit model of the task. In the second experiment, a predictive model was developed and tested. The scientists provided the robots with a tray of everyday objects. By nudging these objects around a table, the robots built a model that helped them predict, to some extent, what might happen if they took a certain course of action. This cause-and-effect model was again shared between the robots. The researchers then used a computer interface showing the test environment to tell the robots to move an object to a certain location, and the robots used their predictive model to work out how to move it. The final experiment was designed to help robots learn directly from humans. Here, the robots were physically moved by a human to reach for the door and open it. This demonstration was analyzed and converted into a neural network, which formed the basis of the robots’ subsequent learning. Then the researchers allowed the robots to try opening the door on their own, again sharing their experiences with each other. Within a few hours, the learning process led to even more versatile robots. Cloud robotics can provide robots with rapidly downloadable intelligence. By employing this method, we may soon witness robots capable of learning tasks much more complicated than opening a door. While humans need a lot of time to grasp others’ knowledge, robots would be able to put their information on a shared network and instantly acquire each other’s skills.
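The first condition in the door-opening experiments, learning purely by trial and error, can be illustrated with a deliberately tiny stand-in: an agent that tries a few discrete actions, tracks how often each succeeds, and increasingly favors the one that works. The action set, success probabilities, and exploration rate below are invented for illustration and have nothing to do with the real robots' continuous control problem.

// Minimal trial-and-error loop: estimate each action's success rate from
// experience and mostly pick the best estimate, exploring occasionally.
#include <cstdio>
#include <random>
#include <vector>

int main() {
  // Pretend action 2 ("grasp handle, turn, pull") is the one that opens the door.
  std::vector<double> successProb = {0.05, 0.10, 0.80, 0.05};
  std::vector<double> value(successProb.size(), 0.0);
  std::vector<int> tries(successProb.size(), 0);
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> coin(0.0, 1.0);
  std::uniform_int_distribution<int> randomAction(0, (int)successProb.size() - 1);

  for (int t = 0; t < 1000; t++) {
    int a;
    if (coin(rng) < 0.1) {                       // explore occasionally
      a = randomAction(rng);
    } else {                                     // otherwise exploit the best estimate
      a = 0;
      for (size_t i = 1; i < value.size(); i++) if (value[i] > value[a]) a = (int)i;
    }
    double reward = (coin(rng) < successProb[a]) ? 1.0 : 0.0;
    tries[a]++;
    value[a] += (reward - value[a]) / tries[a];  // running average of outcomes
  }
  for (size_t i = 0; i < value.size(); i++)
    std::printf("action %zu: estimated success %.2f\n", i, value[i]);
}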

Artificial intelligence isn't the scary future. It's the amazing present.

The year 2017 arrives and we humans are still in charge. Whew! The machines haven't taken over yet, but they are gaining on us. Google's DeepMind AlphaGo computer program recently beat the world champ at Go, a complex board game, while Japanese researchers plan to build the world's fastest supercomputer for use on artificial intelligence projects. It will do 130 quadrillion calculations per second, which is, um, really, really fast. Ask Siri for details. She can explain it better than we can. The essence of artificial intelligence is massive, intuitive computing power: machines so smart that they can learn and become even smarter. If that sounds creepy, you are overthinking the concept. The machines are becoming quicker and more nimble, not sentient. There is no impending threat to humanity from computers that become bored and plot our doom. HAL, the computer villain from "2001: A Space Odyssey," is fictional. Yet ... advances in the field of artificial intelligence occur at such a breakout pace they are redefining the relationship between man and machine. Computer scientist David Gelernter says the coming of computers with true humanlike reasoning remains decades in the future, but when the moment of "artificial general intelligence" arrives, the pause will be brief. Once artificial minds achieve the equivalence of the average human IQ of 100, the next step will be machines with an IQ of 500, and then 5,000. "We don't have the vaguest idea what an IQ of 5,000 would mean," Gelernter wrote in The Wall Street Journal. OK, that's a little bit creepy. A basic test of AI tolerance is your opinion of the self-driving car, which belonged to the sci-fi future a decade ago. Today you can hail one in Pittsburgh. Driverless vehicles rely in part on a form of artificial intelligence known as deep learning — algorithms that can make complex decisions in real-time based on accrued experience. Ford wants to have an autonomous truck on the roads by 2020. The great promise is that robot drivers will never make dumb mistakes at the wheel or fail a Breathalyzer test. But they could render obsolete entire professions: long-distance trucker, for example, or cabbie. Experts hoping to illustrate the potential of artificial intelligence without frightening people conjure the image of the know-it-all yet obsequious digital assistant. It will know where to buy the perfect gift, based on algorithms that understand the latest trends and your family's preferences. And oh, it noticed that you're walking funny. Is your back acting up again? At the hospital, it will analyze an MRI better than doctors can. The frontiers are limitless: analyzing stocks, managing energy use, discovering new drugs. "I think we're going to need artificial assistance to make the breakthroughs that society wants," Demis Hassabis, DeepMind's CEO, told Wired magazine. "Climate, economics, disease — they're just tremendously complicated interacting systems. It's just hard for humans to analyze all that data and make sense of it." You may already have benefited from artificial intelligence without realizing it. Several months ago, Google Translate upgraded to what it calls the Google Neural Machine Translation system. The program relies on a brainlike computational network that sifts through its database to arrive at a logical, nuanced meaning for any sentence in just about any language. 
Here is the old Google Translate struggling to turn a Japanese sentence of a Hemingway line back into English: "Whether the leopard had what the demand at that altitude, there is no that nobody explained." And the new Google Translate, firing its electronic neurons: "No one has ever explained what leopard wanted at that altitude." Missing an article ("the"), but otherwise perfect. Writing about Google Translate and Hemingway in a New York Times magazine article titled "The Great AI Awakening," journalist Gideon Lewis-Kraus pondered the significance of a machine that masters human language: It could be "the major inflection point" in the development of "true artificial intelligence." Be awed, but not afraid. Technically, computers may outthink us, but humans will always have the edge because we are more creative. After all, we built the machines.

Bett 2017: Call for volunteers in London!

Arduino Education empowers educators with the hardware and software tools they need to create more hands-on, innovative learning experiences. Later this month, we’ll be exhibiting our latest STEAM program for upper secondary education at Bett 2017 in London: CTC 101 – Creative Technology in the Classroom 101. We’re looking for volunteers to join our team during the event, from staffing tables and displays, to helping with one-on-one demos, to providing technical assistance. Water and snacks will be provided, of course, and we’ve even prepared a small gift to show our appreciation at the end of your shift! Interested? Please fill out this questionnaire and we’ll get back to you soon! When: Wednesday-Saturday, January 25-28, 2017. Location: ExCeL London, Royal Victoria Dock, 1 Western Gateway, London E16 1XL, United Kingdom

Imagine New York City With 3,000 Taxis Instead of 13,000

Large-capacity ride-sharing services could replace 98 percent of taxi service in Manhattan, researchers report this week in Proceedings of the National Academy of Sciences. “We could drastically reduce the number of vehicles,” with a “minor impact to users,” says Javier Alonso-Mora, a computer scientist at Delft University of Technology in the Netherlands who worked on the project while at the Massachusetts Institute of Technology. Today, about 13,000 taxis are in use in New York City every day, but by design they usually pick up and drop off a single passenger or group. Some popular transportation startups, such as Uber and Lyft, offer ride-sharing options, but those shared rides typically only have space for two passengers at most. Research published in 2014 in Proceedings of the National Academy of Sciences found that 80 percent of Manhattan taxi trips could be shared by two riders, but it didn’t take into account new riders joining after a trip has already begun. In addition, the 2014 work and other studies of ride sharing either limit the number of riders or don’t study the effects of letting customers choose pick-up and drop-off locations that differ from one another’s, Alonso-Mora says. So the real benefits of large-capacity vehicles hadn’t been determined before. Using a randomly picked week of New York City taxi data as input, the researchers created a computer program that produces routes for ride-sharing vehicles that minimize passenger delay: both the time riders spend waiting for a ride and the delay from deviating from their route so their vehicles can pick up new passengers. The program penalized requests that were not completed. It works like this: after a set time interval, such as 30 seconds, it checks for new ride requests and adds them to a queue. Considering both unfulfilled requests and trips already in progress, optimization algorithms compute which passenger should be picked up by which vehicle and where each vehicle should go. The researchers tested the simulation with vehicle capacities of one (a traditional taxi), two (UberPool or Lyft), four (a car), and ten (a minivan), stopping there because of the extra computational power needed to simulate even higher capacities. They used a maximum of 2 minutes of waiting with a 4-minute delay, 5 minutes of waiting with a 10-minute delay, or 7 minutes of waiting with a 14-minute delay, similar to the 5 to 10 minutes it would take to park a car in a busy area like New York City, Alonso-Mora says. They found that it takes only 2,000 ten-passenger vehicles or 3,000 four-passenger vehicles to meet 98 percent of Manhattan taxi demand every day. (The remaining two percent would be lost because of the set delay constraints.) The mean wait would be 2.8 minutes and the mean trip delay 3.5 minutes. “You allow drivers the possibility to make the same amount of money working less,” says Massachusetts Institute of Technology computer scientist and collaborator Daniela Rus. So, she says, instead of taking jobs away from taxi drivers, this would let the same number of workers make the same amount of money in fewer shifts. (The New York Taxi Workers Alliance did not respond to a request for comment.) Alonso-Mora believes the work is evidence that companies should expand large-capacity ride-sharing options, or that buses should switch to more flexible, on-demand schedules. (Which option is better isn’t quite clear yet, he adds.)
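The batching step lends itself to a small illustration. The sketch below collects a batch of requests and greedily assigns each one to the vehicle that can reach it soonest, rejecting requests that would exceed a maximum wait. The real study solves this assignment with a far more sophisticated optimization over shared routes; the coordinates, capacities, and wait limit here are invented.

// Greedy stand-in for the batch assignment idea: every interval, match each
// pending request to the nearest vehicle with a free seat, or reject it if
// no vehicle can arrive within the wait limit.
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

struct Point { double x, y; };
double dist(Point a, Point b) { return std::hypot(a.x - b.x, a.y - b.y); }

struct Vehicle { Point pos; int seatsFree; };
struct Request { Point pickup; };

int assign(const Request& r, std::vector<Vehicle>& fleet, double maxWait) {
  int best = -1;
  double bestCost = std::numeric_limits<double>::max();
  for (size_t i = 0; i < fleet.size(); i++) {
    if (fleet[i].seatsFree == 0) continue;
    double wait = dist(fleet[i].pos, r.pickup);   // travel-time proxy
    if (wait <= maxWait && wait < bestCost) { bestCost = wait; best = (int)i; }
  }
  if (best >= 0) { fleet[best].seatsFree--; fleet[best].pos = r.pickup; }
  return best;  // -1 means the request is rejected (the ~2% in the study)
}

int main() {
  std::vector<Vehicle> fleet = {{{0, 0}, 4}, {{5, 5}, 10}};
  std::vector<Request> batch = {{{1, 1}}, {{6, 4}}, {{20, 20}}};
  for (const auto& r : batch)
    std::printf("request -> vehicle %d\n", assign(r, fleet, 3.0));
}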
Alan Erera, an industrial engineer at the Georgia Institute of Technology who was not involved in the research but studies ride sharing, writes in an email: “I am very optimistic about the tremendous value that real-time ride sharing systems hold for dramatically improving roadway and vehicle fleet utilization for moving passengers.” However, “these results are optimistic, since they assume that everyone is willing to share rides with anyone else, and will sacrifice their own convenience for system optimality. In reality, some will always prefer to ride alone,” he says. He adds that the calculations don’t take into account the extra time the car sits idle while each passenger gets in and out of the vehicle, nor any variability in travel time, which “skews the results somewhat away from conservatism.” “Building new forms of transit by extending the Uber/Lyft model with automated vehicles and/or better ride-matching optimization,” he writes, “will actually only increase congestion and total vehicle-miles travel unless we can find approaches that ensure many riders will pool together and share trips.” Alonso-Mora says that to avoid riding with strangers, users could set a preference for that in a theoretical app. He also notes that the algorithm can be tuned to factor in extra waiting and travel-time variability, but the group found only “small differences” in system performance with different travel times. To deal with another potential challenge, fares, drivers could estimate the savings from multiple passengers and incorporate them into their calculations. He says the next steps could be to take predictions of future requests into account to improve the algorithms. The group may also analyze more cities and explore the implications of autonomous vehicles in such a system. There are already several companies around the world offering large-capacity ride-sharing services. “We absolutely can replace 98 percent of taxi service,” says Matthew George, CEO of Bridj, a Boston, Massachusetts-based startup that offers 13- to 15-seat bus ride-sharing services in several U.S. cities. In March 2016, the company began partnering with local transport authorities to provide customized public transport in Kansas City, Missouri: unionized city employees are at the wheels of its buses. He says there is a waitlist of 30 cities that want to start a similar program. Instead of using bus stops, when a user requests a ride, algorithms consider all nearby requests for similarity. A driver goes to common pick-up and drop-off points within a five- to seven-minute walk of all riders’ starting locations and final destinations, respectively. He says the average user currently spends about $85 a month on its services. He won’t disclose exactly how many users there are, only that its buses have covered “millions of millions” of passenger miles, and that about 80 percent of customers use the buses as their main daily transportation. Some use them in place of taxis, and he expects those kinds of customers to increase in 2017.

The robots are coming to CES, and we can't wait to meet them

There's nothing better at CES than discovering a robot on the show floor. Seriously, robots are cool. They are fun. And thanks to the newly released "Rogue One: A Star Wars Story," public sentiment toward droids is off the charts. It's unlikely the CNET team will stumble across anything as endearing as new Star Wars droid K-2SO as we scour the halls in Las Vegas. Still, there's a ton of robo-excitement in the run-up to CES 2017. At previous shows, robots have primarily been used as marketing gimmicks or demonstration props. That, says IHS analyst Dinesh Kithany, is set to change. "What we will see is more from the application point of view," Kithany said. Companies will be looking to explore what consumers can do with a robot, although the robots themselves, he added, will likely be "in the concept stages." Robots for consumers typically come in one of three categories. One group is service robots, like the Roomba vacuum and pool-cleaning bots, that perform specific tasks. Then there are social robots, like Pepper or Sanbot, that have humanoid features, play games and nag you to do everyday chores, like brushing your teeth. Fully humanoid robots stand at the top of the heap and are being readied to perform care and nursing functions, such as picking people up to help them maintain autonomy and stay in their own homes if they are elderly or injured. The tech might still be at a nascent stage, but it's expected to take off. Shipments of home robots are set to grow from around 5 million units in 2015 to 13 million units in 2020, according to IHS's Service Robots & Drones report 2016. Behind any good robot lies good artificial intelligence. And we've seen some impressive leaps forward over the past year, including Go-playing algorithms and Jarvis, Mark Zuckerberg's smart home AI butler. That's going to make robots a whole lot more appealing. Smart assistants have come a long way too and, thanks to the likes of Amazon Echo and Google Home, are now more than just voices on our gadgets. They are precursors to what we can expect to see in consumer robotics. In one CES panel, a number of robotics experts are on the docket to discuss how improvements in artificial intelligence will help robots become more useful in everyday life. And, of course, evidence of some such improvements will be on show. "We're showcasing some advanced AI technology to demonstrate that AI is a reality and available to consumers for everyday communication and entertainment," said John Rhee, general manager of Ubtech Robotics, which is set to make several announcements regarding its robotics range. Similarly, Austrian company Robart will use CES to show off its autonomous navigation software for household robots. The system, designed for service robots, will allow bots to "recognize their surroundings and communicate intuitively with the user," CEO Michael Schahpar says. "They will learn ever better and adapt to changing surroundings." Taiwan's Industrial Technology Research Institute is also set to demonstrate some essential robotic skills involving computer vision at the show. Researchers will have a robot distinguish between various chess pieces and their locations on the board, as well as between various coffee cups, their locations and fill levels. Hanson Robotics, the maker of realistic humanoid robots that feature a flexible, responsive skin called "frubber," is somebody we'll keep a close eye on at CES. A company spokesman says customers are keen to get hold of humanoid robots with lifelike facial movements.
Earlier this year, Hanson showed off its Sophia robot, which is capable of performing a full range of human facial expressions. At CES, Hanson is due to give its very first public demonstration of its Professor Einstein robot. Others are similarly optimistic about this robot form factor. "2017 is the year in which we will begin to see humanoid robots become home companions," said Ubtech's Rhee. His company is responsible for creating Alpha 2, a short humanoid social robot designed to make household life easier by setting reminders and controlling smart home devices like lights and locks. If you're hoping to find the ultimate robot butler or a reprogrammed KX-series security droid -- it's a Star Wars thing and we're really into Star Wars -- at CES, this probably won't be your year. Still, you can expect to see a host of new consumer robots focused on entertainment and education, especially for teaching kids to code. One example: new designs of Ubtech's Jimu robots that offer the familiar snap-together programmable creatures but with increased mobility. We'll also likely see a number of companies showcasing hardware and software brought together by AI. These projects will give us a glimpse of the skills future robots might have and could persuade people to think about bringing one home. At the very least, we'll find some cute bots to dream about.

This 3D-printed bionic hand can replace or support a limb

3D-printed appendages are, as one might suspect, generally meant for those who are missing a limb. But there are also many people who retain partial functionality of a hand and could still use assistance. Youbionic’s beautifully 3D-printed, myoelectric prosthesis is envisioned for either application, capable of being controlled by muscle contraction as if it were a real body part. As seen in the video below, the Youbionic hand can manipulate many different items, including a small box, a water bottle, and a set of keys. Functionality aside, the movement is extremely fluid, and the smooth black finish really makes it look great. The device is currently equipped with an Arduino Micro, servos, various sensors, a battery pack, and a few switches. Even the breadboard wiring appears to be very neat, though one would suspect the final version will use some sort of PCB. You can learn more and order yours on Youbionic’s website.
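Myoelectric control of this sort usually boils down to reading a muscle-activity signal and mapping it to servo positions. The Arduino-style sketch below is a minimal illustration of that idea under assumed wiring (an analog EMG sensor on A0, a single finger servo on pin 9) and an arbitrary threshold; it is not Youbionic's firmware.

// Hypothetical myoelectric control sketch: close the finger servo when the
// muscle-sensor reading crosses a threshold, open it when the muscle relaxes.
#include <Servo.h>

const int EMG_PIN = A0;        // analog muscle sensor (assumed wiring)
const int THRESHOLD = 400;     // contraction level that triggers a grip (illustrative)
Servo finger;

void setup() {
  finger.attach(9);            // one finger servo on pin 9 (illustrative)
  finger.write(0);             // start with the hand open
}

void loop() {
  int activation = analogRead(EMG_PIN);
  if (activation > THRESHOLD) {
    finger.write(90);          // contraction detected: close the grip
  } else {
    finger.write(0);           // relaxed: open the hand
  }
  delay(20);
}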

China’s $9 billion effort to beat the U.S. in genetic testing

BOSTON — Lindsay Weekes knew something was wrong as soon as her son was born. Her pregnancy had been easy. The baby was a strapping 6 pounds, 12 ounces, with thick, curly black hair like his father’s. But from the first moment Quinlan drew air, Lindsay could see he was tense, his muscles rigid. Within 24 hours, Quinlan was whisked away from their hospital to an intensive care unit at a nearby medical university. There he began a battery of tests in hopes of diagnosing his disorder, the start of a tortuous journey that has thrust the family into the center of a global economic race to push the limits of medicine. The search for an answer has taken Quinlan to the cutting edge of an emerging field: the use of genomics, the study of our DNA, to tailor health care. The United States has long been the industry’s undisputed leader, performing much of the research that first decoded our DNA about 15 years ago. But now China is emerging as America’s fiercest competitor, and it is sinking billions of dollars into research and funding promising new companies both at home and abroad — including a laboratory that handles some of the toughest cases at Boston Children’s Hospital, where Quinlan has become a favorite of the staff. Finding an answer for Quinlan and children like him relies as much on Chinese expertise as it does on American ingenuity. One of the founders of the lab was born and trained in China before immigrating to the United States. Chinese company WuXi NextCODE is one of its chief investors, and researchers there use WuXi’s programs to analyze the reams of data inside our DNA. Under President-elect Donald Trump, America’s relationship with China has been defined by frustration over the loss of factory jobs in the nation’s industrial heartland to the assembly lines of the world’s second-largest economy. But experts say it is the battle for dominance in innovation and science that is more likely to determine the economy of the future. “I’m very frustrated at how aggressively China is investing in this space while the U.S. is not moving with the same kind of purpose,” said Eric Schadt, director of the Icahn Institute for Genomics and Multiscale Biology at Mount Sinai. “China has established themselves as a really competitive force.” For the Weekes family, the stakes couldn’t be higher. “There’s some missing piece of the puzzle that we need to find right now,” Lindsay Weekes said.

‘Did you see that?’

Two years ago, on New Year’s Eve, Weekes and her husband were squirreled away in their tidy split-level home in the suburbs of Boston. Quinlan had spent four days in the hospital after his birth and then was sent home when doctors couldn’t pinpoint the problem. Now he was about 4 months old. Weekes looked down at Quinlan cradled in her arms and realized his lips were blue, his eyes staring blankly back at her. He was having a seizure. “Did you see that?” she called to her husband, Jaunel, who goes by the nickname “Bear.” By the time he walked over, Quinlan’s lips were once again a healthy pink. But then it happened twice more, and Weekes and Bear were riding in an ambulance with Quinlan on their way to Boston Children’s Hospital when the clock struck midnight. He didn’t leave the hospital for another month. The seizures weren’t the only problem. Quinlan had difficulty following objects with his eyes. He wasn’t rolling over. And his doctors still didn’t know why. “We didn’t have the diagnosis, so it was just treat the symptoms,” Weekes said.
The family hoped genetic testing would provide an answer. The cost of sequencing DNA has dropped dramatically since researchers unraveled our biological building blocks for the first time in 2001. Estimates of the price tag for that initial discovery range from several hundred million to a few billion dollars. Decoding a genome now runs between a few hundred and a few thousand dollars, spawning a flurry of potential new applications. Experts say the technology could prove as transformational as the Internet. Pharmaceutical companies want genetic information to concoct powerful new drugs. Hospitals hope to analyze genes to personalize medical care. And doctors believe genetic data could provide the keys to understanding rare and mysterious diseases like Quinlan’s — and maybe one day even develop a cure. For China, the genomics revolution has been a chance to showcase its technical prowess as well as cultivate homegrown innovation. Over the past two decades, China transformed itself into an economic superpower through massive industrialization. But the country is now facing the limits of that model amid slowing growth, toxic pollution and the shift of manufacturing work to less-developed nations. To succeed over the next generation, China hopes to emulate Western-style entrepreneurship to transform its economy. “When they looked out on the horizon, they saw that those who defined the cutting edge of the global economy are innovation leaders,” said Denis Simon, executive vice chancellor of Duke Kunshan University in China. “For China to play a central role in world affairs, as well as to have a very competitive economy, it would have to step up its innovation game.” What China cannot create, it appears more than willing to buy. Chinese investors — both private and government-supported — are backing American start-ups in hopes of capturing the entrepreneurial spirit. China has sunk more than $3.6 billion into the U.S. health and biotechnology sector over the past 16 years, according to an analysis by Rhodium Group, a consulting firm. Scientist and entrepreneur Ge Li is a poster child for China’s new model. Trained at Columbia University, Li was working as a laboratory scientist in Philadelphia in 2000 when he realized he could replicate his job in his home country for a fraction of the cost. His company, WuXi AppTec, which includes WuXi NextCODE, is now estimated to be worth more than $3.3 billion. Roughly 14,000 people carry out the company’s research and product development around the world. In the United States, the company has helped finance an array of biotech start-ups, including the home DNA testing company 23andMe. It tests medical devices in St. Paul, Minn., and develops biologic drugs in Atlanta. In Philadelphia, it is one of the anchors of a technology hub in the city’s Navy Yard, opening its third biomanufacturing facility there this fall. Until last year, WuXi’s largest division was listed on the New York Stock Exchange. The company is now privately owned, but speculation abounds that it will eventually go public again — on a Chinese exchange. “We’re a U.S. company in the U.S., but we’re a Chinese company in China,” said Hannes Smarason, chief operating officer at WuXi NextCODE. “We’re local in every market.”

The Quinlan genome project

By about six months old, most babies are sitting up and smiling, laughing and clapping.
Quinlan came down with a severe respiratory virus that sent him back to the hospital, then to a nearby rehabilitation center. His seizures became more severe, and he underwent a tracheotomy to help him breathe. “You pull your hair out,” Weekes said. “I’m not a doctor by any means, and I’m sitting there trying to figure out, ‘Why is my baby doing these things?’ ” Sequencing is only the first step in what doctors call the “diagnostic odyssey.” Making sense of the resulting mountain of data is its own challenge. Unspooling just one human genome takes up roughly 150 gigabytes, the equivalent of roughly 32 DVDs. The gene responsible for Quinlan’s disorder could be hidden in any one of them. Geneticist Tim Yu is one of the founders of Claritas, the sequencing lab that handled Quinlan’s case, and he hunted through the entire library of the boy’s DNA for clues. A few years ago, a project this complex would have required getting bulky hard drives of genomic databases through the mail. WuXi NextCODE’s big breakthrough was to speed up the process by introducing the medical equivalent of an Internet search engine, able to scour roughly two dozen reference databases over the Internet to find similar mutations. The creation of vast warehouses of genetic information has raised concerns about privacy, however. Critics have questioned drug companies’ access to the databases, and there have been several well-publicized cases of researchers connecting people to DNA samples that were submitted anonymously. Marcy Darnovsky, executive director of the advocacy group Center for Genetics and Society, said that China provides few safeguards for those who face discrimination based on what may be uncovered within their DNA. “Technically it can’t be 100 percent assured that your data will remain anonymous,” she said. Smarason called WuXi’s systems “ironclad,” arguing that the data is not identifiable and is encrypted to defend against hackers. Consent is required from every person who is part of the database. “Many patients with diseases for which there is no treatment and many with rare disorders want their data shared, in order to contribute to a better understanding of their condition and to develop better drugs,” Smarason said. Indeed, the larger the database, the better Yu’s chances of finding the gene responsible for Quinlan’s disorder. Yu looked for genes associated with Quinlan’s unique symptoms: a small head, seizures, involuntary movements and rigid muscles. WuXi NextCODE’s system found 120 that could have caused one of the symptoms. Nearly half of those genes were strong matches on both of the systems the lab uses to sequence patients’ DNA. But WuXi’s program found that only six could have been passed down from parents who showed no sign of the disorder. One stood out to Yu: a clipped segment on chromosome 7 resulting in a mutation of the BRAT1 gene. “They are almost invariably bad,” Yu said. At the time, only a handful of similar cases had been documented in the medical literature. In all of them, the babies died within months.

Racing and waiting

Quinlan’s disorder now has a name: RMFSL, or rigidity and multifocal seizure syndrome, lethal neonate. But the description is no longer accurate: He has already survived much longer than the diagnosis would have predicted. Quinlan celebrated his second birthday at his grandparents’ house.
He wore a T-shirt with the Sesame Street character Grover on it, and his birthday cupcakes were decorated with blue icing and the Superman logo, a Q replacing the S. “When we finally got the diagnosis, it was like a sigh of relief,” Weekes said. “We don’t know what the future is going to hold, but at least we know why.” Gene mutations similar to Quinlan’s have recently been found in a handful of other children, suggesting a broader spectrum of symptoms and offering some hope for Quinlan’s progress. One is 10 years old with only mild mental disabilities. Others are more severely compromised than he is. Yu and Quinlan’s neurologist at Boston Children’s Hospital, Heather Olson, recently published a paper expanding the definition of the disorder. Medically, the diagnosis has helped in small ways. Children with his mutation often have trouble breathing, so Quinlan is on an oxygen monitor to guard against sleep apnea. Common colds get immediate and aggressive treatment. His parents and nurses pound on his chest to help him clear his lungs several times a day. There is no cure for Quinlan. But this past spring, Chinese officials launched a $9 billion investment in precision medicine, a wide-ranging initiative to not only sequence genes, but also develop customized new drugs using that data. The funding dwarfs a similar effort announced by President Obama a year ago that has an uncertain future in Trump’s new administration. “The U.S. system has more dexterity and agility than the Chinese system,” said Simon, of Duke Kunshan University. “But the learning curve in China is very powerful, and the Chinese are moving fast. The question is not if. The question is when.” Quinlan and his family are waiting for the answers.

Home robot Kuri is like an Amazon Echo designed by Pixar

Domestic robots are sort of here, with self-driving vacuums and speakers that control your smart home, but Mayfield Robotics’ Kuri could be the first real home robot, combining mobility and true interaction with approachable, friendly design. Mayfield Robotics is a startup fully owned and funded by Bosch, with a team of co-founders that have extensive experience in the field of robotics, but also in interaction design and machine learning. Their first product is Kuri, an intelligent home robot making its official debut at CES this year, with pre-orders beginning in the U.S. and a target ship date of sometime during the holidays in 2017. Kuri responds to voice input, and in this way is similar to other devices like Google Home or Amazon Echo. There’s a four-microphone array built into its compact, vaguely conical body so that it can hear you no matter where you are in the room, and it has a speaker built in, too. But Kuri doesn’t respond with words; instead, it uses sounds, lights and its expressive eyes to communicate responses back to a user – all of which is a key part of its design. The robot also has a built-in HD camera tucked behind one eye, and a range of sensors to stop it falling down stairs or bumping into furniture. It moves on three wheels that let it rotate in any direction and move from room to room with ease as it follows you around the house or goes where you tell it. There’s a processor on board to handle tasks like voice and image recognition locally, and it’s programmable through easy tools like IFTTT to expand its feature set. The diminutive robot is just under two feet tall and about a foot wide, with a total weight of 14 lbs. Both iOS and Android apps are available to help control and interact with Kuri when you’re not using him (Mayfield genders the robot male in press materials and conversation), and it also seeks out its charging pad when its battery needs topping up. It’s tempting to think of Kuri as a rolling Echo with the ability to blink, but the robot is much more than that. It’s designed to be a companion first and an assistant second, and that meant a design process far different from what Amazon undertook with its smart speaker. Kuri’s movement design was led by a longtime Pixar animator, in fact, which Mayfield believes is key to making it something that people will not only be comfortable using, but will want to interact with regularly. Each inclination of Kuri’s head, each blink of its eye, its body shape and its pacing and locomotion design were all carefully considered, with the thought in mind of building something that was not necessarily optimally efficient or functional, but that was approachable, calming and inviting. Kuri needed to inspire trust in its users, and so the design process involved eliminating any motion, sound or type of movement that would potentially unnerve its users. The Pixar DNA is apparent in Kuri’s finished product. The robot looks like something that’s leapt out of Wall-E, and the end result is a robot you want to befriend. I find myself thanking Alexa absent-mindedly when using my Echo, but I’d never consider Amazon’s virtual assistant a companion, per se; with Kuri, I find myself already coveting its affection even without owning one. Kuri is also different from things like Home or Echo because it’s designed to be its own robot; that is, at launch at least, it won’t have a long list of third-party integrations.
But it will be able to control things via IFTTT; to let you check in on your house and its various rooms while you’re away using its HD camera; to read to your kid before bed; and to play games and podcasts as it tails you throughout the day. The team at Mayfield Robotics wanted very much to create a satisfying, full experience from the moment you unpack Kuri, and so the focus was not on opening up integrations from day one, but instead on building something you’ll feel has value all on its own. It’s a good strategy, and one that will hopefully help Kuri have broader appeal at launch, since it doesn’t require connecting a whole host of third-party services to become truly functional. We’ve long anticipated a day when we welcome robots into our homes, as caregivers, assistants and service companions. Kuri is not the Jetsons’ Rosie, or R2-D2, but it is a step in that direction, and one that focuses on one of the big elements still needed to get us to that future – interaction design. Robots being truly welcomed in the home will need to arrive first at a place where we see them as comfortable presences; no one’s looking to share their domestic space with factory floor production robots, for instance. Kuri’s lineage in the entertainment realm is a smart starting point for a domestic bot, since it plays on our familiarity and affection for robots we’ve seen portrayed in film and television, including Wall-E, Rosie and R2-D2. And if we’re ever going to elevate robots in the home beyond the level of automated appliances, that’s a necessary step toward greater adoption. Mayfield Robotics is asking pre-order customers to put down a $100 deposit now for Kuri reservations, with the balance of its $699 asking price due at shipment. It’ll still be almost a year before we see if its focus on personality is truly the right approach for broad acceptance, but from what I’ve seen, Mayfield is off to a very promising start.

Mayfield Robotics Announces Kuri, a $700 Mobile Home Robot

For about two years now, Mayfield Robotics has been working on something. A robot, we’d heard. Something helpful for the home. Not a vacuum. No screen, but a face. Without much in the way of (public) information on this secret robot, what kept us interested was the team: CTO and co-founder Kaijen Hsiao spent almost four years at Willow Garage, and co-founder and COO Sarah Osentoski led robotics R&D projects at Bosch for four years, including work with Bosch’s PR2 from the beta program. And with funding from Bosch’s Startup Platform, Mayfield has been able to hire an enormous team of people, about 40 of them, in just a couple of years without making any public announcements whatsoever. Today, Mayfield is introducing Kuri, “an intelligent robot for the home.” Kuri is half a meter tall, weighs just over 6 kilograms, and is “designed with personality, awareness, and mobility, [that] adds a spark of life to any home.” Kuri has some fairly sophisticated technology inside of it. Besides what you’d expect (a camera, microphone array, speakers, and touch sensors), Kuri also has some sort of “laser-based sensor array” that it uses for obstacle detection, localization, and navigation. If Kuri can map your house by itself and then remember where things are, that would be slick, and we’re looking forward to seeing how much autonomy is there. We’re also unsure how much of what Kuri does relies on the cloud, but we do know that it’ll run for a couple hours straight, and then autonomously recharge itself on a floor dock. Besides mobility, what makes Kuri unique is the fact that it has no display (besides a color-changing light on its chest), and that it doesn’t even try to talk to you, as Pepper and Jibo do. There’s speech recognition, but Kuri won’t talk back, instead relying on a variety of beepy noises and its expressive head and eyes to communicate. Essentially, it’s R2-D2-ing, which is a verb now, meaning to have effective non-speech interactions. I like this idea, because so much of what makes us frustrated with AI assistants is their inability to reliably respond like a human would. When something talks to you, you can’t help but expect it to communicate like a human, and when it inevitably fails, it’s annoying. Kuri sidesteps this by not giving you a chance to think that it’s trying to be human at all, theoretically making it much harder to disappoint. You, like us, may have a few questions at this point. Questions like, “What does Kuri do?” Here’s everything the press materials say about that: Kuri is built to connect with you and helps bring technology to life. Kuri can understand context and surroundings, recognize specific people, and respond to questions with facial expressions, head movements, and his unique lovable sounds. Like many adored robots in popular culture, his personality and ability to connect are his greatest attributes. Or more specifically, in the context of Kuri’s hardware and software (which you interface with through an app): a built-in HD camera so you can check in on the house or pets while you’re away; a four-microphone array, powerful dual speakers, and Wi-Fi + Bluetooth connectivity, so he can react to voice commands or noises, play music, read the kids a bedtime story, or follow you around playing podcasts while you’re getting ready for work; and easily programmable tasks and IFTTT capabilities to connect within modern smart homes.
Many of these capabilities can be found in Amazon Echo or Google Home, and if you’re looking for that social component, Pepper and Jibo, again, offer something comparable. What Kuri has going for it, from what we can tell, is simplicity to some extent, but more importantly, mobility. We’re hoping that Mayfield will be able to tell us how their little robot will be uniquely valuable in ways that these other systems aren’t, and fortunately, we’re getting a chance to ask them later today. We’ve got a meeting booked for this afternoon with CTO Kaijen Hsiao, and we should have time for both a demo and an interview. Let us know in the comments if there are any specific questions you’d like answers to. If you’re already sold on Kuri, a US $100 deposit towards the $699 total cost will save you one for delivery in time for the holidays in 2017. [ Kuri ] via [ Mayfield Robotics ]

Amazon now has 45,000 robots in its warehouses

Amazon significantly expanded its army of warehouse robots over the course of 2016, according to a report by The Seattle Times. The newspaper — based in the same city as Amazon's global headquarters — wrote last week that the ecommerce giant now has 45,000 robots across 20 fulfillment centres. That's reportedly an increase of 50% on the same time the year before, when the company said it had 30,000 robots working alongside 230,000 people. Amazon bought a robotics company called Kiva Systems in 2012 for $775 million (£632 million). Kiva's robots automate the picking and packing process at large warehouses in a way that stands to help Amazon become more efficient. The robots — 16 inches tall and almost 145kg — can run at 5mph and haul packages weighing up to 317kg. When Amazon acquired Kiva, Phil Hardin, Amazon's director of investor relations, said: "It's a bit of an investment that has implications for a lot of elements of our cost structure, but we’re happy with Kiva. It has been a great innovation for us, and we think it makes the warehouse jobs better, and we think it makes our warehouses more productive." Amazon also uses other types of robots in its warehouses, including large robotic arms that can move large pallets of Amazon stock. The company has been adding about 15,000 robots year-on-year, based on multiple previous reports. At the end of 2014, Amazon said it had 15,000 robots operating across 10 warehouses. In 2015, that number rose to 30,000 and now Amazon has 45,000. Last April, Amazon chief financial officer Brian Olsavsky reportedly said at a robots conference: "We've changed, again, the automation, the size, the scale many times, and we continue to learn and grow there." Olsavsky added that the number of robots used varies from warehouse to warehouse, saying that some are "fully outfitted" in robots, while others don't have "robot volume" for economic reasons. Beyond the warehouse, Amazon is also looking at automating other aspects of its business. Last December, the company announced that it had made its first delivery by an automated drone in the UK. It's also filed a patent that would allow it to use automated drones to deliver packages from large airships in the future.

After Mastering Singapore’s Streets, NuTonomy’s Robo-taxis Are Poised to Take on New Cities

Take a short walk through Singapore’s city center and you’ll cross a helical bridge modeled on the structure of DNA, pass a science museum shaped like a lotus flower, and end up in a towering grove of artificial Supertrees that pulse with light and sound. It’s no surprise, then, that this is the first city to host a fleet of autonomous taxis. Since last April, robo-taxis have been exploring the 6 kilometers of roads that make up Singapore’s One-North technology business district, and people here have become used to hailing them through a ride-sharing app. Maybe that’s why I’m the only person who seems curious when one of the vehicles—a slightly modified Renault Zoe electric car—pulls up outside of a Starbucks. Seated inside the car are an engineer, a safety driver, and Doug Parker, chief operating officer of nuTonomy, the MIT spinout that’s behind the project. The car comes equipped with the standard sensor suite for cars with pretensions to urban autonomy: lidars on the roof and around the front bumper, and radar and cameras just about everywhere else. Inside, the car looks normal, with the exception of three large buttons on the dashboard labeled Manual, Pause, and Autonomous, as well as a red emergency stop button. With an okay from the engineer, the safety driver pushes the Autonomous button, and the car sets off toward the R&D complex known as Fusionopolis. By the end of this year, nuTonomy expects to expand its fleet in Singapore from six cars to dozens, as well as adding a handful of test cars on public roads in the Boston area, near its Cambridge headquarters, and one or two other places. “We think Singapore is the best place to test autonomous vehicles in the world,” Parker tells me as the car deftly avoids hitting a double-parked taxi. One-North offers a challenging but not impossible level of complexity, with lots of pedestrians, a steady but rarely crushing flow of vehicle traffic, and enough variability to give the autonomous cars what they need to learn and improve. Riding in an autonomous car makes you acutely aware of just how many potentially dangerous behaviors we ignore when we’re behind the wheel. Human drivers know from experience what not to worry about, but nuTonomy’s car doesn’t yet, so it reacts to almost everything, with frequent (and occasionally aggressive) attempts at safety. If the car has even a vague suspicion that a pedestrian might suddenly decide to cross the road in front of it, it will slow to a crawl. This mistrust of pedestrians as well as other drivers was designed into the software. “Humans are by far our biggest challenge,” Parker says. Over the course of 15 minutes, our car has to deal with people walking in the gutter, cars drifting across the centerline, workers repairing the road, taxis cutting across lanes, and buses releasing a swarm of small children. Even a human driver would have to concentrate, and it’s unsurprising that the safety driver sometimes has to take over and reassure the car that it’s safe to move. To handle these complex situations, nuTonomy uses formal logic, which is based on a hierarchy of rules similar to Asimov’s famous Three Laws of Robotics.
Priority is given to rules like “don’t hit pedestrians,” followed by “don’t hit other vehicles,” and “don’t hit objects.” Less weight is assigned to rules like “maintain speed when safe” and “don’t cross the centerline,” and less still to rules like “give a comfortable ride.” The car tries to follow all of the rules all the time, but it breaks the less important ones first: If there’s a car idling at the side of the road and partially blocking the lane, nuTonomy’s car can break the centerline rule in order to maintain its speed, swerving around the stopped car just as any driver would. The car uses a planning algorithm called RRT*—pronounced “r-r-t-star”—to evaluate many potential paths based on data from the cameras and other sensors. (The algorithm is a variant of RRT, or rapidly exploring random tree.) A single piece of decision-making software evaluates each of those paths and selects the path that best conforms to the rule hierarchy. By contrast, most other autonomous car companies rely on some flavor of machine learning. The idea is that if you show a machine-learning algorithm enough driving scenarios—using either real or simulated data—it will be able to figure out the underlying rules of good driving, then apply those rules to scenarios that it hasn’t seen before. This approach has been generally successful for many self-driving cars, and in fact nuTonomy is using machine learning to help with the much different problem of interpreting sensor data—just not with decision making. That’s because it’s very hard to figure out why machine-learning systems make the choices they do. “Machine learning is like a black box,” Parker says. “You’re never quite sure what’s going on.” Formal logic, on the other hand, gives you provable guarantees that the car will obey the rules required to stay safe even in situations that it’s otherwise completely unprepared for, using code that a human can read and understand. “It’s a rigorous algorithmic process that’s translating specifications on how the car should behave into verifiable software,” explains nuTonomy CEO and cofounder Karl Iagnemma. “That’s something that’s really been lacking in the industry.” Gill Pratt, CEO of the Toyota Research Institute, agrees that “the promise of formal methods is provable correctness,” while cautioning that it’s “more challenging to apply formal methods to a heterogeneous environment of human-driven and autonomous cars.” nuTonomy is quickly gaining experience in these environments, but it recognizes that these things take time. “We’re strong believers that this is going to make roads much, much safer, but there are still going to be accidents,” says Parker. Indeed, one of nuTonomy’s test vehicles got into a minor accident in October. “What you want is to be able to go back and say, ‘Did our car do the right thing in that situation, and if it didn’t, why didn’t it make the right decision?’ With formal logic, it’s very easy.” The ability to explain what’s happened will help significantly with regulators. So will the ability to show them just what fix you’ve made so that the same problem doesn’t happen again. Effective regulation is critical to the success of autonomous cars, and it’s a challenging obstacle in many of the larger auto markets. In the United States, for example, federal, state, and local governments have created a hodgepodge of regulations related to traffic, vehicles, and driving. And in many areas, technology is moving too fast for government to keep up.
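One way to picture this rule hierarchy is as a weighted-violation score over candidate paths: each path a planner like RRT* proposes is charged a penalty for every rule it would break, with safety rules weighted overwhelmingly more than convenience and comfort rules, and the cheapest path wins. The rules, weights, and candidate paths in the sketch below are illustrative stand-ins, not nuTonomy's actual specification.

// Toy rule-hierarchy evaluation: score candidate paths by the penalties of
// the rules they violate and pick the cheapest, so the car swerves across
// the centerline rather than crawl behind a stopped vehicle.
#include <cstdio>
#include <string>
#include <vector>

struct Rule { std::string name; double penalty; };
struct Candidate { std::string label; std::vector<int> violatedRules; };  // indices into the rule list

int main() {
  // Higher-priority rules carry overwhelmingly larger penalties (illustrative values).
  std::vector<Rule> rules = {
    {"don't hit pedestrians", 1e9},
    {"don't hit other vehicles", 1e6},
    {"don't hit objects", 1e5},
    {"maintain speed when safe", 100},
    {"don't cross the centerline", 50},
    {"give a comfortable ride", 1},
  };
  // Two ways past a car idling in the lane: slow to a crawl behind it,
  // or briefly cross the centerline and keep moving.
  std::vector<Candidate> paths = {
    {"slow down behind the stopped car", {3, 5}},
    {"swerve across the centerline", {4}},
  };
  const Candidate* best = nullptr;
  double bestCost = 1e18;
  for (const auto& p : paths) {
    double cost = 0;
    for (int r : p.violatedRules) cost += rules[r].penalty;
    std::printf("%-35s cost %.0f\n", p.label.c_str(), cost);
    if (cost < bestCost) { bestCost = cost; best = &p; }
  }
  std::printf("chosen: %s\n", best->label.c_str());
}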
A handful of other companies are testing autonomous taxis and delivery vehicles on public roads, including Uber in Pittsburgh. The motive is obvious: When robotic systems render human drivers redundant, it will eliminate labor costs, which in most places far exceed what fleet operators will pay for their autonomous vehicles. The economic potential of autonomous vehicles may be clear. But what’s less clear is whether regulators will approve commercial operations anytime soon. In Singapore, the city-state’s government is both more unified and more aggressive in its pursuit of a self-driving future. “We’re starting with a different philosophy,” explains Lee Chuan Teck, deputy secretary of Singapore’s Ministry of Transport. “We think that our regulations will have to be ready when the technology is ready.” Historically, Singapore has looked to the United States and Europe for guidance on regulations like these, but now it’s on its own. “When it came to autonomous vehicles, we found that no one was ready with the regulations, and no one really knows how to test and certify them,” says Tan Kong Hwee, the director for transport engineering of the Singapore Economic Development Board. Singapore’s solution is to collaborate with local universities and research institutions, as well as the companies themselves, to move regulations forward in tandem with the technology. Parker says that these unusually close ties between government, academia, and industry are another reason nuTonomy is testing here. Singapore has good reason to be proactive: Its 5.6 million people are packed into just over 700 square kilometers, resulting in the third most densely populated country in the world. Roads take up 12 percent of the land, nearly as much as is dedicated to housing, and as the population increases, building more roads is not an option. The government has decided to make better use of the infrastructure it has by shifting from private cars (now used for nearly 40 percent of trips) to public transit and car shares. Rather than spending 95 percent of their time parked, as the average car does today, autonomous cars could operate almost continuously, reducing the number of cars on Singapore’s roads by two-thirds. And that’s with each car just taking one person at a time: Shared trips could accommodate a lot more people. Over the next three to five years, Singapore plans to run a range of trials of autonomous cars, autonomous buses, autonomous freight trucks, and even autonomous utility vehicles. The goal will be to understand how residents use autonomous vehicle technology in their daily lives. Beyond that, Lee says, Singapore is “about to embark on a real town that we’re developing in about 10 to 15 years’ time, and we’re working with the developers from scratch on how we can incorporate autonomous vehicle technology into their plans.” Building new communities from scratch, such as One-North, is a Singaporean specialty. In this new town, most roads will be replaced with paths just big enough for small autonomous shuttles. For longer trips, on-demand autonomous cars and buses will travel mostly underground, waiting in depots outside the city center until they’re summoned. It’s a spacious, quiet vision, full of plazas, playgrounds, and parks—and practically no parking spaces. To begin to meet this challenge, nuTonomy has partnered with Grab, an Asian ride-sharing company, making autonomous taxi services available to a small group of commuters (chosen from thousands of applicants) around One-North. 
Testing the taxis in a real application like this is important, but equally important is understanding how users interact with the cars once they stop being a novelty and start being a useful way to get around. “People very quickly start to trust the car,” says Parker. “It’s amazing how quickly it becomes normal.” If all goes well, Parker adds, the company should be ready to offer commercial service through Grab—to all customers, not just preapproved ones—around the One-North area in 2018. At first, each taxi will have a safety driver, but nuTonomy is working on a way to allow a human to remotely supervise the otherwise autonomous car when necessary. Eventually, nuTonomy will transition to full autonomy with the option for teleoperation. “The whole structure of cities is going to change,” Parker predicts. “I think it’s going to be the biggest thing since the beginning of the automobile age.” This article appears in the January 2017 print issue as “Hail, Robo-taxi!”

CES 2017: AR, VR, and IoT will be hot, 3D Printing Not


This week sees the annual consumer technology extravaganza that is the CES 2017 show in Las Vegas. Once almost an afterthought, technologically speaking, consumer electronics have become increasingly important in driving the entire global tech industry. What products companies choose to bring to the show often represents an interesting tension between hard-nosed calculations and corporate wish-fulfillment about the direction tech is expected to take in the coming months and years. At CES 2017 we in the IEEE Consumer Electronics Society expect to see a reduced focus on drones compared to 2016. Drones haven't gone away, but there are few solid practical applications for most consumers. Still, small inexpensive drones could be a growth area as toys and hobby vehicles. Instead we expect to see a lot more focus on augmented reality (AR), virtual reality (VR), and home health. (And, of course, the occasional surprising and interesting product or announcement.) There are many long- and short-form VR projects ongoing (both professional and amateur), helped by the availability of consumer versions of selfie stick VR systems along with a variety of cameras. Social media sites and YouTube now offer 360 video support as a matter of course, also helping to drive adoption. Wearables will be important, although the smart watch market hasn't picked up as fast as many had hoped. These really need to find their killer applications (perhaps some AR application using phones and watches such as we’ve seen with Pokemon Go). There will be an increase in Internet of Things (IoT) consumer applications (we look forward to seeing this year’s incarnation of the proverbial smart fridge) as well as cloud-based IoT offerings that provide services to consumers. Wearable and cloud-based IoT services will also mean we’ll be seeing AI and machine learning applications. These applications could be big enablers of new consumer services running on wearable devices as well as household voice-activated products from Amazon, Google and other companies. For example, voice control will be a big theme at CES 2017 with new product introductions by Amazon, Google and others. Machine intelligence will also make still and video images more useful with increasing capabilities for image recognition. Large enterprise companies with strong machine learning capabilities will be showing how data from connected intelligent consumer devices will enable new ways to reach customers and offer them additional services. I would also expect that there will be a greater focus on security and privacy, with the proliferation of connected consumer devices and recent reports that some of these devices have been hijacked as bots in denial-of-service attacks. Greater security and anonymity for shared content will be important safeguards to make sure that consumers feel safe with their connected devices and services. Turning to televisions, 4K TVs now have a standard that takes full advantage of their potential, including expected HDR (high dynamic range) images as well as their resolution and color capabilities. Coupled with decreasing prices, these TVs should see greater pickup by both leading-edge consumers and the higher end of mainstream consumers. Many consumers are increasingly considering 4K TVs for their next replacement TV. So lower-cost 4K TVs will be a big presence at CES. In addition, UHD (ultra-high definition) streaming services will be present, as well as UHD Blu-ray disc players that will provide content for viewing on these displays.
(Almost all new content is captured in at least 4K nowadays.) On a smaller scale, there could be more maker-oriented items as well as craft projects including micro-brewing (both coffee and beer) at CES 2017, although I don't expect to see the big 3D printer displays we saw the last few years. However, unusual 3D printing (printed pancakes, anyone?) could be a sneak hit at the 2017 show. Finally, automobile technology will continue to play a big role at CES as more and more autonomous driving functions are included in new model cars. This will also include tying consumer applications into automobiles and mobile activities. Towards the end of the CES show, on January 8, the 2017 IEEE ICCE Conference will have a focus on Virtual and Augmented Lifestyles. The ICCE conference focuses on consumer technologies that will be the hottest thing three years from today. As a teaser of what’s to come, new for this year are tracks organized with the IEEE Biometrics and RFID Councils, the IEEE Cloud Computing initiative and the IEEE Society for the Social Implications of Technology.

Did Inadequate Women’s Healthcare Destroy Star Wars’ Old Republic?


The central, overarching conflict of the first six Star Wars movies is that a democratic republic devolves into an authoritarian dictatorship. A key part of that political coup is Anakin Skywalker turning to the dark side and becoming Darth Vader. The way the story is told implies that the fall of the Republic and fall of Anakin Skywalker are linked—young Anakin is prophesied as the one to “bring balance to the Force.” Indeed, Darth Vader’s redemption and death coincide with the fall of the Empire and the rise of the New Republic. Anakin’s turn to the dark side begins in Episode II with the death of his mother, but it’s really the events of Episode III that are instrumental in changing him. And if you think about it, the trigger for his metamorphosis is extremely weird. Anakin Skywalker allies himself with Palpatine in hopes that he can use the dark side of the Force to save Padme Amidala from death in childbirth. Shortly after Padme announces to him that she’s pregnant, Anakin has a dream that she dies while giving birth. The dream feels similar to the one he had about his mother before she died. “It’s just a dream, honey,” Padme tells him the next morning. “Yeah, okay,” he replies, but the man never regains his chill. He seriously spends two hours of the movie freaking out about his wife’s uterus, and hypes himself up so much that he gets to the point of slaughtering tiny tots in a Jedi temple. All because he can’t think of another way to save Padme from reproductive health complications. Why didn’t they just go to a goddamned obstetrician-gynecologist? Padme Never Goes to an OB/GYN Prenatal visits never happen in Episode III, not even offscreen. Despite Anakin’s spiraling paranoia about Padme’s health, doctors or hospitals are bizarrely never mentioned. And the evidence says that Padme never got an ultrasound. When she confronts Anakin towards the end of the movie—shortly before giving birth—she refers to “our child,” rather than “our children.” It doesn’t make sense for her to be hiding the ball here; she’s making one last emotional appeal to the father of her children, to try to bring him back to the light side. Rather, Padme simply doesn’t know that she’s about to give birth to twins. Later, when she actually gives birth, everyone is taken aback by the revelation that she’s having babies in the plural. All of this points to one thing: Padme’s never had an ultrasound. In fact, Padme’s never had a prenatal check-up. OB/GYNs Probably Don’t Even Exist in the Star Wars Universe If there were any women’s healthcare available, there is no reason why Padme wouldn’t take advantage of it. For one thing, her husband is flipping the fuck out over her possibly dying in childbirth. Why didn’t she visit a doctor in an attempt to soothe his fevered mind? Even if access to reproductive health services is limited in this galaxy—as in ours—Padme is probably the woman best situated to get it. She’s a sitting Senator residing in Coruscant, the capital of the galaxy. She’s clearly a woman of means, given that she has three elaborate costume changes for every hour of the day. Padme is hanging out in a posh penthouse in the most populous city in the galaxy: if there’s medical assistance out there, she can get it. Furthermore, there is no bar to Padme and Anakin visiting the OB/GYN together. Although their marriage is a secret, Padme doesn’t hide the fact that she’s pregnant. She still attends Senate sessions, and when Obi-Wan visits her, her baby bump is evident and he even comments on it.
Anakin has plenty of innocuous reasons to hang around Padme and even accompany her to a doctor’s office. She’s a Senator, and he’s a super magic law enforcement agent frequently assigned to protect politicians (including her, in Episode II). And if the couple were still super paranoid about visiting the doctor together, she could just go by herself. It’s not like “ANAKIN SKYWALKER IS THE SECRET FATHER OF MY BABY” is written on her cervix. Lack of Medical Options Makes Anakin Just Lose His Shit If you think your wife is going to die due to birth complications, your first thought isn’t going to be, “Yes, I should help install a dictatorship and then murder small children, to protect her.” But that is the conclusion that Anakin gets to. And fear for Padme’s life is the direct cause of that. Maybe Anakin is just extremely stupid (an explanation we can’t ignore), but given that we’ve established that Padme never got an ultrasound, it seems more likely that there were no medical options that they could turn to before his thought processes went straight to “Betray The Jedi Order And Everything I Believe In.” In a galaxy of bacta tanks that can heal grievous wounds, and highly advanced cybernetic prosthetics to replace limbs, reproductive health is stuck in the Middle Ages. At the end of Episode III, Anakin gets three limbs chopped off and then falls into hot lava. He lives. His wife has babies, under medical supervision. She dies. Padme’s “Will to Live” Since tweeting about the OB/GYN plot hole, I’ve received countless messages from men who are furious with me about letting “feminism ruin” Revenge of the Sith, which might be the first time that fans have ever crawled out of the woodwork to defend Star Wars prequels. A primary contention is that Padme didn’t die from medical causes, but merely “lost the will to live.” This is logically irrelevant. Whether she died of a broken heart or of eclampsia, my point is that Anakin’s transformation into Darth Vader is triggered by his fear of childbirth complications. Padme dies after his turn to the dark side: her actual cause of death doesn’t matter. But that aside, let’s talk about how incredibly stupid “She has lost the will to live” actually is. Consider the provenance of the diagnosis. It comes from a medical droid that first says it doesn’t know the actual medical cause of her rapid deterioration. The droid then goes on to pronounce that she has “lost the will to live,” despite leading with an admission that it doesn’t know what’s wrong. How is that consistent? And why would a robot be programmed to detect “will to live”? In short, this droid is completely full of shit. That said, depression after giving birth, and death caused by emotional shock, are both real things. But they’re medical things, with diagnosable symptoms and actual medical remedies. The same thing goes for a death caused by Anakin Force-choking Padme when she goes to confront him. A Force-choke is still a choke, and a choke is a physical cause of injury. If any of these things actually caused Padme’s death, then this droid is just an incompetent fuck who doesn’t know what it’s doing. Death from “Broken Heart” Is Real There really is an actual medical phenomenon in which people die of a broken heart. It’s called Takotsubo cardiomyopathy, or “broken-heart syndrome.” Grief from a loved one’s death can trigger one’s own death. A study published in the New England Journal of Medicine in 2005 said that stress hormones can “stun” the heart into spasming. Most people recover, but some of them do die.
But the point is that broken-heart syndrome manifests as literal heart failure: the left ventricle of the heart spasms the way it does in a conventional heart attack caused by blocked arteries. But cardiac failure isn’t mentioned by the medical droid. Being Real Sad After Giving Birth Is a Thing And neither is postpartum depression for that matter. Severe depression after having a baby is actually very common, and there are real treatments available. But in general, you don’t really just up and die from postpartum depression. However, you’d think that if Padme were dying from being very sad, someone would at least mention postpartum depression? You know, in passing. Maybe even eliminating it as a possible diagnosis. But no, in Revenge of the Sith, everything related to birth is just a big question mark hanging over the characters. Who even knoooooows how uteruses work? Sometimes they just kill people, randomly, because you get sad. Palpatine Did Not Kill Padme For days I’ve had hordes of men telling me that Palpatine stole Padme’s will to live. This makes absolutely no sense and it’s not just because that’s the stupidest thing I’ve ever heard. Let’s approach this logically, while looking at the canon. First off, there’s no mention or hint of this in the actual film. Secondly, there are no instances in any of the films in which a Jedi or Sith is able to harm someone else from a distance that requires hyperspace travel. Jedi and other Force-sensitive people are able to sense the deaths of others from a great distance. See, for example, Obi-Wan sensing the destruction of Alderaan while still in hyperspace (A New Hope), Yoda feeling the deaths of Jedi from all over the galaxy (Revenge of the Sith), or Leia sensing Han Solo’s death (The Force Awakens). But being able to actually strike at people, or even physically locate them is something else entirely. When Vader Force-chokes people, it’s usually people who are in the same room as him. The one exception is in the Empire Strikes Back, when he kills Admiral Ozzel while teleconferencing with him, and then promotes Captain Piett on the spot. During this incident, Admiral Ozzel’s ship is orbiting the planet Hoth, having just arrived in the system to attack the Rebel base. Vader is aboard Star Dreadnought Executor, and is also present for the battle of Hoth. Therefore, at most, a Force-choke is possible if the choker and the chokee are present in the same star system. And besides, if Palpatine could remotely kill or even locate someone across hyperspace distances via the Force, there would probably be a few people ahead of Padme on the list. For example, Obi-Wan Kenobi, who is known to be accompanying Padme. If Padme Died From Being Sad, It’s Directly Caused By Anakin’s Fear of Childbirth But back to the point: her actual cause of death is irrelevant. Even if we do accept the inane premise that she lost the will to live and died of being sad about Anakin, she’s only sad because Anakin has completely lost his shit after psyching himself out over her imminent death in childbirth. If Anakin hadn’t been frightened out of his mind about the deadly capacities of Padme’s fallopian tubes, he wouldn’t have turned to the Dark Side. This is what counts as dramatic irony for George Lucas, and while plenty can be said on his “writing,” that is neither here nor there. * Okay, fine, I can’t resist commenting on his writing. 
The giant OB/GYN plot hole isn’t really about the Star Wars universe having inadequate reproductive health care; it’s about Lucas lazily relying on a blanket of ignorance surrounding the entire phenomenon of childbirth. Childbirth is a black box that can explain anything that is difficult to explain, like how Anakin can turn on everyone he loves and all the principles he holds dear, or how Padme can just up and die without anything being visibly wrong. Reproductive health and childbirth are a crutch, and Lucas gets away with it because his audience accepts that these things are mysterious and cannot be intervened with the way that the loss of limbs can be remedied with robot prosthetics, or the way Luke can be rescued from near-death on Hoth by being submerged in a bacta tank. Having babies is worse than being mauled by a wampa ice creature or being chopped up by lightsabers and falling into a river of lava. Lucas can write a world like that, and worse, the audience will accept it. But uteruses aren’t made of malignant magic. Women’s bodies are real physical things that can be studied and understood and, when necessary, cured. The public at large should be better educated about reproductive health in general. Like ankle sprains, tooth decay, or heart attacks, reproductive health should be a banal medical thing that a lot of people know something about. The fact that there’s so much ignorance around it is a disgrace, a disgrace just as massive and overwhelming as the very existence of the Star Wars prequels. I guess what I’m saying is, maybe if the Galactic Senate hadn’t defunded Planned Parenthood, the Republic wouldn’t have succumbed to an evil fascist dictatorship.

CES 2017: Complete Coverage of the Best Emerging Tech


Our team coverage from the show floor, company events, and conference panels brings you the latest revolutionary gadgets, global trends, and accumulating advances in the technology that changes our daily lives. Check back throughout the week for the latest developments in wearables, virtual and augmented reality, automotive tech, consumer cybersecurity, the Internet of Things, mobile tech, and more from this year’s Consumer Electronics Show in Las Vegas, Nevada.

Measure a magnet’s strength with this DIY Gauss Meter


You may know that a neodymium magnet is more powerful than something you usually find on a refrigerator, but by how much? Most people, even those willing to harvest magnets from disk drives, accept that some magnets are stronger than others. This, however, wasn’t quite good enough for Anthony Garofalo, who instead converted a prototype voltmeter he made using an Arduino Uno and a tiny OLED display into something that displays the magnetic field strength, or Gauss level. It also shows whether it’s observing the north or south pole of the magnet, which certainly could be useful in some situations. Though full documentation isn’t available right now, Garofalo says that he’ll make it available once he repackages everything in a smaller format with an enclosure. If you’d like to see more of his work, including the voltmeter he based this on, be sure to check out his Instructables page and some other neat stuff on his YouTube channel! This entry was posted by Arduino Team on Tuesday, January 3rd, 2017 and is filed under Arduino, Featured, Uno.
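Garofalo's documentation isn't out yet, so the sketch below is only a guess at the usual recipe for this kind of meter: a ratiometric linear Hall-effect sensor (an SS49E/A1302-class part is assumed here) on an analog pin, a conversion from volts to gauss using the sensor's nominal sensitivity, and the sign of the reading used to label which pole faces the sensor. The pin, the calibration constants, and printing to the serial monitor instead of the OLED are all assumptions rather than details of his build.

// Hypothetical Arduino sketch for a simple Gauss meter; not Garofalo's code.
// Assumes a ratiometric linear Hall-effect sensor (e.g. SS49E/A1302-class)
// powered from 5 V with its output on A0, and prints to the serial monitor.

const int HALL_PIN = A0;
const float VCC = 5.0;                         // supply voltage
const float QUIESCENT_V = 2.5;                 // sensor output at zero field (assumed Vcc/2)
const float SENSITIVITY_V_PER_GAUSS = 0.0014;  // ~1.4 mV per gauss, typical for this sensor class

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Average a handful of samples to steady the reading.
  long raw = 0;
  for (int i = 0; i < 32; i++) {
    raw += analogRead(HALL_PIN);
  }
  float volts = (raw / 32.0) * VCC / 1023.0;
  float gauss = (volts - QUIESCENT_V) / SENSITIVITY_V_PER_GAUSS;

  Serial.print("Field: ");
  Serial.print(fabs(gauss), 1);
  Serial.print(" G, pole facing sensor: ");
  // Which sign maps to which pole depends on sensor orientation; calibrate with a known magnet.
  Serial.println(gauss >= 0 ? "south" : "north");

  delay(250);
}

Swapping the Serial calls for the OLED library of your choice, and measuring QUIESCENT_V with no magnet present, would bring this closer to what the video shows.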

Quantum computers ready to leap out of the lab in 2017


Quantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. “People are really building things,” says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. “I’ve never seen anything like that. It’s no longer just research.” Google started working on a form of quantum computing that harnesses superconductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful ‘classical’ supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in Berkeley, California, say they expect to reach crucial technical milestones soon. Academic labs are at a similar point. “We have demonstrated all the components and all the functions we need,” says Schoelkopf, who continues to run a group racing to build a quantum computer at Yale. Although plenty of physics experiments still need to be done to get components to work together, the main challenges are now in engineering, he and other researchers say. The quantum computer with the most qubits so far — 20 — is being tested in an academic lab led by Rainer Blatt at the University of Innsbruck in Austria. Whereas classical computers encode information as bits that can be in one of two states, 0 or 1, the ‘qubits’ that comprise quantum computers can be in ‘superpositions’ of both at once. This, together with qubits’ ability to share a quantum state called entanglement, should enable the computers to essentially perform many calculations at once. And the number of such calculations should, in principle, double for each additional qubit, leading to an exponential speed-up. This rapidity should allow quantum computers to perform certain tasks, such as searching large databases or factoring large numbers, which would be unfeasible for slower, classical computers. The machines could also be transformational as a research tool, performing quantum simulations that would enable chemists to understand reactions in unprecedented detail, or physicists to design materials that superconduct at room temperature. “I tell my students that 2017 is the year of braiding.” There are many competing proposals for how to build qubits. But there are two front runners, confirmed in their ability to store information for increasingly long times — despite the vulnerability of quantum states to external disturbance — and to perform quantum-logic operations. One approach, which Schoelkopf helped to pioneer and which Google, IBM, Rigetti and Quantum Circuits have adopted, involves encoding quantum states as oscillating currents in superconducting loops. 
The other, pursued by IonQ and several major academic labs, is to encode qubits in single ions held by electric and magnetic fields in vacuum traps. John Martinis, who worked at the University of California, Santa Barbara, until Google hired him and his research group in 2014, says that the maturity of superconducting technology prompted his team to set the bold goal of quantum supremacy. The team plans to achieve this using a ‘chaotic’ quantum algorithm that produces what looks like a random output (S. Boixo et al. Preprint at https://arxiv.org/abs/1608.00263; 2016). If the algorithm is run on a quantum computer made of relatively few qubits, a classical machine can predict its output. But once the quantum machine gets close to about 50 qubits, even the largest classical supercomputers will fail to keep pace, the team predicts. The results of the calculation will not have any uses, but they will demonstrate that there are tasks at which quantum computers are unbeatable — an important psychological threshold that will attract the attention of potential customers, Martinis says. “We think it will be a seminal experiment.” But Schoelkopf does not see quantum supremacy as “a very interesting or useful goal”, in part because it dodges the challenge of error correction: the ability of the system to recover its information following slight disturbances to the qubits, which becomes more difficult as the number of qubits increases. Instead, Quantum Circuits is focused on making fully error-corrected machines from the start. This requires building in more qubits, but the machines could also run more-sophisticated quantum algorithms. Monroe hopes to reach quantum supremacy soon, but that is not IonQ’s main goal. The start-up aims to build machines that have 32 or even 64 qubits, and the ion-trap technology will enable their designs to be more flexible and scalable than superconducting circuits, he says. Microsoft, meanwhile, is betting on the technology that has the most to prove. Topological quantum computing depends on excitations of matter that encode information by tangling around each other like braids. Information stored in these qubits would be much more resistant to outside disturbance than are other technologies and would, in particular, make error correction easier. No one has yet managed to create the state of matter needed for such excitations, let alone a topological qubit. But Microsoft has hired four leaders in the field, including Leo Kouwenhoven of Delft University of Technology in the Netherlands, who has created what seems to be the right type of excitation. “I tell my students that 2017 is the year of braiding,” says Kouwenhoven, who will now build a Microsoft lab on the Delft campus. Other researchers are more cautious. “I am not making any press releases about the future,” says Blatt. David Wineland, a physicist at the National Institute of Standards and Technology in Boulder, Colorado, who leads a lab working on ion traps, is also unwilling to make specific predictions. “I’m optimistic in the long term,” he says, “but what ‘long term’ means, I don’t know.”
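As a generic, textbook-style illustration of the scaling claim above (not tied to any of the machines mentioned), an n-qubit register is a superposition over all 2^n classical bit strings:

  |\psi\rangle = \sum_{x \in \{0,1\}^{n}} \alpha_x |x\rangle, \qquad \sum_{x} |\alpha_x|^2 = 1.

A classical simulator has to track one complex amplitude \alpha_x per bit string, so each added qubit doubles the storage: 20 qubits (the Innsbruck machine) means about 2^20, roughly 10^6, amplitudes, easily handled, while about 50 qubits means 2^50, roughly 10^15, amplitudes, which is why that figure marks the supremacy threshold the Google team is targeting.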

The whole philosophy community is mourning Derek Parfit. Here's why he mattered.


Derek Parfit, who died at age 74 on Sunday evening, was not the most famous philosopher in the world. But he was among the most brilliant, and his papers and books have had a profound, incalculably vast impact on the study of moral philosophy over the past half century. His work did not dwell on topics of merely academic interest. He wrote about big topics that trouble everyone, philosopher and layperson alike: Who am I? What makes me “me”? What separates me from other people? How should I weigh my desires against those of others? What do I owe to my children, and to the future in general? What does it mean for an action to be right or wrong, and how could we know? Parfit was not a prolific author; he tended to write his books over the course of decades, refining them repeatedly after discussions with colleagues and students. In the end, he wrote only two: 1984’s Reasons and Persons, and 2011’s On What Matters, a two-volume, 1,440 page tome whose third volume is still yet to be published. But both are classics, the latter generating such furious debate that a volume of essays discussing it was released two years before the book itself even came out (most of the key arguments had circulated in draft form for some time). For an excellent overview of Parfit’s life and the major themes of his work, I highly recommend Larissa MacFarquhar’s beautiful and incisive New Yorker profile, published as On What Matters finally hit shelves. But perhaps the best way to experience Parfit’s writing, and understand why both his ideas and his method of articulating them proved so influential, is to dig into a few of his most important and fascinating arguments. If there’s a single idea with which Parfit is most strongly identified, it’s the view that personal identity — who you are, specifically, as a person — doesn’t matter. This argument, made in the 1971 paper “Personal Identity” and in the third section of Reasons and Persons, is jarring at first, but his case is persuasive, and the implications are profound. Parfit asks us to imagine that he is fatally injured in an accident, but his brain is mostly unharmed. His two brothers are also in the accident, and emerge brain-dead, but with otherwise healthy bodies. Doctors then split his healthy brain in half, and implant a half in each of his brothers’ bodies. “Each of the resulting people believes that he is me, seems to remember living my life, has my character, and is in every other way psychologically continuous with me,” Parfit writes in Reasons and Persons. “And he has a body that is very like mine.” He then asked: What happened to Derek Parfit in all this? Did he die? That can’t be right; if anything, he doubled. Two people are now walking around with his memories and experiences and thoughts and attitudes, rather than one. Did he continue to exist? If so, as who? Is he one of the brothers, but not the other? Why just that one and not the other? Is he both — are two people simultaneously Derek Parfit? That can’t be right either. If that were true, then if only one of them died, then it would be the case that Derek Parfit is both alive and has died. That’s ridiculous. His answer is that this question, of what happened to “Derek Parfit,” is empty. It’s unanswerable, and even if it could be answered, it wouldn’t tell you anything new about the world. “There will be two future people, each of whom will have the body of one of my brothers, and will be fully psychologically continuous with me, because he has half of my brain,” he writes. 
“Knowing this, we know everything.” If identity mattered, then this result would be just as bad as death, since both erase his identity. But this clearly isn’t as bad as death; his psychological being gets to keep on going, twice! So identity isn’t what matters. “Since I cannot be one and the same person as the two resulting people, but my relation to each of these people contains what fundamentally matters in ordinary survival, the case shows that identity is not what matters,” he concludes. “What matters is … psychological connectedness and/or psychological continuity, with the right kind of cause.” This implies a lot of counterintuitive things about people’s relationships to their future selves, and to each other. For one thing, it makes selfishness a lot less defensible. When I decline to give $500 to charity and instead go on a vacation to Mexico, I’m privileging my future self above another person. But if personal identity isn’t what matters, then the fact that going on vacation benefits a future-person who happens to be physically and psychologically continuous with present-me seems less important. I’m separated from my future self by time; I’m separated from the people the charity would help by space. Why treat the latter separation as more important? This view — that our distance from our future and past selves is greater than we might imagine, and our distance from other people’s present selves is smaller — has a lot in common with traditional Buddhist teaching. An appendix to Reasons and Persons is devoted to drawing out this similarity. After being introduced to the book by Harvard philosopher Dan Wikler, a Tibetan monastery in northern India began chanting passages from Reasons and Persons along with more traditional material. And this argument also influenced how Parfit thought about his own death. One particularly poignant passage from the book has circulated among former students and colleagues on social media since his passing: When I believed [that personal identity is what matters], I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others. When I believed [that personal identity is what matters], I also cared more about my inevitable death. After my death, there will be no one living who will be me. I can now redescribe this fact. Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention. Some of these future experiences may be related to my present experiences in less direct ways. There will later be some memories about my life. And there may later be thoughts that are influenced by mine, or things done as the result of my advice. My death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me. Now that I have seen this, my death seems to me less bad. 
Ignoring personal identity makes a lot of ethical problems a lot easier. Consider the dilemma posed by a prospective mother who’s informed that she has an asymptomatic disease that poses no risk to her well-being, but would result in her offspring having a considerably shorter lifespan. If, however, she receives a couple of shots, free of charge, over six months to cure the disease, then any children she has thereafter will live a full life. Intuitively, most people would say that it’s better to wait — the would-be kid is better off for having a doubled lifespan. If personal identity mattered, though, saying that becomes harder. After all, the possible kid six months from now and the possible kid right now are two totally different kids — just as actual siblings are different individuals. By delaying, the mother isn’t making any currently existing kid better or worse off; she’s just causing one to exist, and preventing the other from ever existing. If personal identity matters, this is a conundrum. But since Parfit argues that personal identity doesn’t matter, it suffices to say that if some individual is brought into existence, it’s better that said individual live a longer rather than shorter life. But, as Parfit showed in the concluding section of Reasons and Persons, this creates new difficulties. In 1984, when he was writing, fears of overpopulation were very prominent: fears of a “population bomb” in which exponential growth in the human population would send average living standards plummeting until the whole world was living in one miserable slum. Amidst that debate, he offered an argument suggesting that a nightmare overpopulation world might actually be an improvement, so long as the people in it were still better off alive than dead. He thought this conclusion was horrifying and worked for decades on a way to avoid it, but he concluded it’s very difficult indeed to avoid. He asks us to compare four possible worlds: A, A+, B, and “divided B” (or B-). Each has a different-sized population and a different level of average well-being for each population and sub-population contained within it: A is a world with a relatively small population with very high average well-being; A+ is that world, plus an equally-sized group of somewhat less well-off individuals. B- is a world with two A-sized groups with high average well-being, but not quite as high as A. And B is just the two groups in B- squished together. So, which is the best of these groups? Well, it seems like A+ is better than A. All that happened was a new group of people was added, people who presumably enjoy their lives and would rather exist than not exist. If that’s the only change, it seems like a good one! B-, in turn, seems better than A+. The average level of well-being for both groups in B- is higher than for the combined population of A+, and the total amount of well-being, if you add it all up, is higher as well. And B- seems just as good as B, since they’re identical; B- is just the split-up version of B. The conclusion you’re left with is that B > B- > A+ > A, and, accordingly, that B > A. It’s better for there to be twice as many people, even if each of them is a little bit worse off than in a world with a lower population. This is where things get tricky. Presumably there’s a group C that’s twice as big as B with slightly lower average happiness, and that’s better than B. And then a group D in turn. And then a group E.
And then on and on until you get group Z: Z is a world with vastly, vastly more people — 100 billion, or 200 billion, say — all living lives that are just barely worth living. Parfit’s reasoning suggests that this is better than a much smaller world where people are, on average, much happier. This ludicrous-sounding suggestion he deemed the “repugnant conclusion,” and it’s extremely difficult to figure out a way to think about ethics that avoids it. You could say that what matters is average well-being, not total, and so say that A+ is in fact worse than A because the addition of less well-off people brings down the average. But then you’ve backed yourself into saying that it’s bad to bring perfectly happy people into existence, even when doing so makes no one else worse off. That’s also ridiculous. One of the more common responses is to say that world Z can’t be better, because it’s not making any specific, actual people better off; it’s making everyone on Earth now worse off, and then adding another 93 billion on top of that. That argument is compelling until you remember that personal identity doesn’t matter. If you buy Parfit’s argument about identity, then it shouldn’t matter much whether specific people are made better or worse off, it just matters that there are people, whoever they are, who collectively are better or worse off. Some philosophers have responded by just accepting the repugnant conclusion as true and trying to think up explanations for why it might not be so implausible (for instance, maybe a life “barely worth living” isn’t that much worse than the most happy life possible). At the time of his death, Parfit was working on fleshing out his “theory X,” the term he’s used since Reasons and Persons for a theory that can compellingly avoid the repugnant conclusion. I read an early draft as a college senior back in 2012 during a seminar Parfit conducted; it was striking, then as now, how profound and original a puzzle Parfit had managed to pose that he and many of his colleagues were still trying to sort out possible answers three decades later. As befits its title, Parfit’s last and longest book On What Matters sprawled across a great variety of topics. It’s broadly interested in what reasons people have to act in certain ways, or hold certain beliefs, or desire certain things. A lot of those questions have to do with morality, but some don’t. Perhaps the greatest joy of reading it is spotting the occasional diversions, the odd moments here and there where he makes an aside from the main narrative, often concisely expressing what would take others of us pages and pages to articulate. One of my very favorite passages of the book is about art, and music specifically, and whether we can have reasons to enjoy or be moved by a piece of art. Parfit was himself a rather accomplished artist; he was a poet in his youth, and published a poem in the New Yorker in 1962, when he was 19 and working as a researcher for the magazine. He later took up photography. Parfit was hardly primarily interested in aesthetics, but it was still fascinating to see him apply his approach to ethics to the analysis of art: It is sometimes claimed that we have reasons to enjoy, or be thrilled or in other ways moved by, great artistic works. In many cases, I believe, this claim is false. We can have reasons to want to enjoy, or be thrilled or moved by, these artistic works. But these are not reasons to enjoy, or to be thrilled or moved by, these works. 
We do have reasons to admire some novels, plays or poems, given the importance of some of the ideas that they express. But poetry is what gets lost in the translation, even if this translation expresses the same ideas. And we never have reasons to enjoy, or be moved by, great music. If we ask what makes some musical passage so marvelous, the answer might be ‘Three modulations to distant keys’. This answer describes a cause of our response to this music, not a reason. Modulations to distant keys are like the herbs, spices, or other ingredients that can make food delicious. When someone neither enjoys nor is moved by some great musical work, this person is not in any way less than fully rational, by failing to respond to certain reasons. In comparing music with food in this way, I am not belittling music, ranking it below novels, plays, or poems. Music is at least as great as the other arts. Without music, Nietzsche plausibly (though falsely) said, life would be an error. But music is also the lost battlefield and graveyard of most general aesthetic theories. While much of his work had profound practical implications, Parfit was not primarily a public intellectual concerned with convincing the public to change how it thinks about morality, the way that his friend and colleague Peter Singer is. But Parfit, like Singer, participated in the effective altruism movement, which argues that people have an obligation to do what they can to improve the world through actions like adopting vegetarianism, donating at least 10 percent of their income to effective charities, and so forth. After his passing, Singer shared the conclusion of the soon-to-be-published third volume of On What Matters, which touches on these topics, and in particular argues for the importance of staving off existential risks that threaten the future of humanity, risks like global warming, pandemics, nuclear annihilation, and so on: I regret that, in a book called On What Matters, I have said so little about what matters. I hope to say more in what would be my Volume Four. I shall end this volume with slight revisions of some of my earlier claims. One thing that greatly matters is the failure of we rich people to prevent, as we so easily could, much of the suffering and many of the early deaths of the poorest people in the world. The money that we spend on an evening’s entertainment might instead save some poor person from death, blindness, or chronic and severe pain. If we believe that, in our treatment of these poorest people, we are not acting wrongly, we are like those who believed that they were justified in having slaves. Some of us ask how much of our wealth we rich people ought to give to these poorest people. But that question wrongly assumes that our wealth is ours to give. This wealth is legally ours. But these poorest people have much stronger moral claims to some of this wealth. We ought to transfer to these people, in ways that I mention in a note, at least ten per cent of what we earn. What now matters most is how we respond to various risks to the survival of humanity. We are creating some of these risks, and discovering how we could respond to these and other risks. If we reduce these risks, and humanity survives the next few centuries, our descendants or successors could end these risks by spreading through this galaxy. Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good.
Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea. If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.

I’ve left Twitter. It is unusable for anyone but trolls, robots and dictators


I deactivated my Twitter account today. It was more of a spontaneous impulse than a New Year resolution, although it does feel like a juice cleanse, a moulting, a polar-bear plunge, a clean slate (except the opposite – like throwing your slate into a volcano and running). One moment I was brains-deep in the usual way, half-heartedly arguing with strangers about whether or not it’s “OK” to suggest to Steve Martin that calling Carrie Fisher a “beautiful creature” who “turned out” to be “witty and bright as well” veered just a hair beyond Fisher’s stated boundaries regarding objectification (if you have opinions on this, don’t tweet me – oh, wait, you can’t); and the next moment the US president-elect was using the selfsame platform to taunt North Korea about the size and tumescence of its nuclear program. And I realised: eh, I’m done. I could be swimming right now. Or flossing. Or digging a big, pointless pit. Anything else. Twitter, for the past five years, has been a machine where I put in unpaid work and tension headaches come out. I write jokes there for free. I post political commentary for free. I answer questions for free. I teach feminism 101 for free. Off Twitter, these are all things by which I make my living – in fact, they comprise the totality of my income. But on Twitter, I do them pro bono and, in return, I am micromanaged in real time by strangers; neo-Nazis mine my personal life for vulnerabilities to exploit; and men enjoy unfettered, direct access to my brain so they can inform me, for the thousandth time, that they would gladly rape me if I weren’t so fat. I talk back and I am “feeding the trolls”. I say nothing and the harassment escalates. I report threats and I am a “censor”. I use mass-blocking tools to curb abuse and I am abused further for blocking “unfairly”. I have to conclude, after half a decade of troubleshooting, that it may simply be impossible to make this platform usable for anyone but trolls, robots and dictators. Surprisingly, none of that is the reason I left. I still loved Twitter – the speed of information, the breadth of analysis, the jokes, the gifs, the fortifying albeit intermittent solidarity, the chance to vet your instincts against those of people much smarter and better informed than you. Every day, people on Twitter – particularly people of colour, trans activists, disabled activists and sex workers – taught me how to be a better person and a better neighbour, a gift they persisted in dispensing even (always) at great personal cost. I still believe, at least in the rear-view mirror, in Twitter’s importance as a democratising force – facilitating direct, transparent access between the disempowered and the powerful, the marginalised and the ignorant. But I’m leaving anyway, for a while. I hate to disappoint anyone, but the breaking point for me wasn’t the trolls themselves (if I have learned anything from the dark side of Twitter, it is how to feel nothing when a frog calls you a cunt) – it was the global repercussions of Twitter’s refusal to stop them. The white supremacist, anti-feminist, isolationist, transphobic “alt-right” movement has been beta-testing its propaganda and intimidation machine on marginalised Twitter communities for years now – how much hate speech will bystanders ignore? When will Twitter intervene and start protecting its users? – and discovered, to its leering delight, that the limit did not exist. No one cared. 
Twitter abuse was a grand-scale normalisation project, disseminating libel and disinformation, muddying long-held cultural givens such as “racism is bad” and “sexual assault is bad” and “lying is bad” and “authoritarianism is bad”, and ultimately greasing the wheels for Donald Trump’s ascendance to the US presidency. Twitter executives did nothing. On 29 December, Twitter CEO Jack Dorsey tweeted: “What’s the most important thing you want to see Twitter improve or create in 2017?” One user responded: “Comprehensive plan for getting rid of the Nazis.” “We’ve been working on our policies and controls,” Dorsey replied. “What’s the next most critical thing?” Oh, what’s our second-highest priority after Nazis? I’d say No 2 is also Nazis. And No 3. In fact, you can just go ahead and slide “Nazis” into the top 100 spots. Get back to me when your website isn’t a roiling rat-king of Nazis. Nazis are bad, you see? Trump uses his Twitter account to set hate mobs on private citizens, attempt to silence journalists who write unfavourably about him, lie to the American people and bulldoze complex diplomatic relationships with other world powers. I quit Twitter because it feels unconscionable to be a part of it – to generate revenue for it, participate in its profoundly broken culture and lend my name to its legitimacy. Twitter is home to a wealth of fantastic anti-Trump organising, as well, but I’m personally weary of feeling hostage to a platform that has treated me and the people I care about so poorly. We can do good work elsewhere. I’m pretty sure “ushered in kleptocracy” would be a dealbreaker for any other company that wanted my business. If my gynaecologist regularly hosted neo-Nazi rallies in the exam room, I would find someone else to swab my cervix. If I found out my favourite coffee shop was even remotely complicit in the third world war, I would – bare minimum – switch coffee shops; I might give up coffee altogether. Apparently that sentiment is in the air because, as I was writing this column, I came across a post by my friend Lauren Hoffman, a writer for Vulture and Cosmopolitan: “I’ve made many real/good friends on Twitter but I guess if I met all my friends working at, like, the mall and the mall became a tacit endorsement of fascism I would keep the friends but stop going to the mall.” Keep the friends. Ditch the mall.

The Real Name Fallacy


By J. Nathan Matias People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions? Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment. We need to change our entire approach to the question. Our concerns about anonymity are overly simplistic; system design can’t solve social problems without actual social change. The idea that anonymity is the real problem with the internet is based in part on misreadings of theories formed more than thirty years ago. In the early 1980s, many executives were unsure if they should allow employees to use computers and email. Managers worried that allowing employees to communicate across the company would enable labor organizing, conflict, and inefficiency by replacing formal corporate communication with informal digital conversations. As companies debated email, a group of social psychologists led by Sara Kiesler published experiments and speculations on the effects of “computer-mediated communication” in teams. Their articles inspired decades of valuable research and offered an early popular argument that anonymity might be a source of social problems online. In one experiment, the researchers asked computer science students who were complete strangers to make group decisions about career advice. They hosted deliberations around a table, through anonymous text chat, or through chat messages that displayed names. They also compared real-time chat to email. They found that while online decisions were more equitable, the decisions also took longer. Students also used more swear words and insults in chat conversations on average. But the researchers did not find a difference between the anonymous and non-anonymous groups [19]. Writing about unanswered questions for future research, Kiesler speculated in 1984 that since computers included less information on social context, online communications might increase social conflict and disputes with employers [13]. As Kiesler’s speculations became cited thousands of times, her call for more research was often taken as scientific fact. Her later, correlational findings were also misinterpreted as true effects [20]. Along the way, Kiesler’s nuanced appeal for changes in social norms was lost and two misconceptions became common: (a) that social problems could be attributed to the design of computer systems, and (b) that anonymity is to blame. These ideas aren’t reflected in the research. In 2016, a systematic review of 16 lab studies by Guanxiong Huang and Kang Li of Michigan State University found that on average, people are actually more sensitive to group norms when they are less identifiable to others [11]. While some non-causal studies have found associations between anonymity and disinhibited behavior, this correlation probably results from the technology choices of people who are already intending conflict or harm [12]. Under lab conditions, people do behave somewhat differently in conversations under different kinds of social identifiability, something psychologists call a “deindividuation” effect. Despite the experimental evidence, the misconception of online anonymity as a primary cause of social problems has stuck.
Since the 1980s, anonymity has become an easy villain to blame for whatever fear people hold about social technology, even though lab experiments now point in a different direction. Beyond the lab, what else does research tell us about information disclosure and online behavior?

Roughly half of US adult victims of online harassment already know their attacker, according to a nationally-representative study by Pew's Maeve Duggan in 2014 [6]. The study covered a range of behaviors from name-calling to threats and domestic abuse. Even if harassment related to protected identities could be "solved" in one effort to move to "real names," more than half of US harassment victims, over 16 million adults, would be unaffected.

Conflict, harassment, and discrimination are social and cultural problems, not just online community problems. In societies, including the US, where violence and mistreatment of women, people of color, and marginalized people is common, we can expect similar problems in people's digital interactions [1]. Lab and field experiments continue to show the role that social norms play in shaping individual behavior; if the norms favor harassment and conflict, people will be more likely to follow. While most research and design focuses on changing the behavior of individuals, we may achieve better results by focusing on changing climates of conflict and prejudice [16].

Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people's names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more [8]. Revealing marital status on DonorsChoose caused donors to give less to students with women teachers in fields where women were a minority [18]. Gender- and race-based harassment are only possible if people know a person's gender and/or race, and real names often give strong indications of both. Requiring people to disclose that information forces those risks upon them.

Companies that store personal information for business purposes also expose people to potentially serious risks, especially when that information is leaked. In the early 2010s, poorly-researched narratives about the effects of anonymity led to conflicts over real-name policies known as the "Nymwars." These narratives provided the justification for more advanced advertising-based business models, which collect more of people's personal information in the name of reducing online harm. Several high-profile hackings of websites have revealed the risks involved in trusting companies with your personal information.

We also need to better understand whether there is a trade-off between privacy and resources for public safety. Since platforms that collect more personal information have higher advertising revenues, they can hire hundreds of staff to work on online safety; paradoxically, platforms that protect people's identities have fewer resources for protecting users. Since it's not yet possible to compare rates of harassment between platforms, we cannot know which approach works best on balance.

It's not just for trolls: identity protections are often the first line of defense for people who face serious risks online.
According to a US nationally-representative report by the Data & Society Institute, 43% of online harassment victims have changed their contact information and 26% have disconnected from online networks or devices to protect themselves [15]. When people do withdraw, they are often cut off from the networks of support they need to survive harassment.

Pseudonymity is a common protective measure. One study of the reddit platform found that women, who are more likely to receive harassment, also use multiple pseudonymous identities at greater rates than men [14].

Requirements of so-called "real names" misunderstand how people manage identity across multiple social contexts, exposing vulnerable people to risks. In the book It's Complicated, danah boyd shares what she learned by spending time with American teenagers, who commonly manage multiple nickname-based Facebook accounts for different social contexts [24]. Requiring a single online identity can collapse those contexts in embarrassing or damaging ways. In one story, boyd describes a college admissions officer who considered rejecting a black applicant after seeing gang symbols on the student's social media page. The admissions officer hadn't considered that the symbols might not reveal the student's intrinsic character; posting them might have been a way to survive in a risky situation. People who are exploring LGBTQ identities often manage multiple accounts to prevent disastrous collapses of context, safety practices that some platforms disallow [7].

Clear social norms can reduce problems even when people's names and other identifying information aren't visible. Social norms are our beliefs about what other people think is acceptable, and norms aren't de-activated by anonymity; we learn them by observing other people's behavior and being told what's expected [2]. Earlier this year, I supported a 14-million-subscriber pseudonymous community in testing the effect of rule-postings on newcomer behavior. In preliminary results, we found that posting the rules at the top of a discussion caused first-time commenters to follow the rules 7 percentage points more often on average, from 75% to 82%.
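If you want to run a similar evaluation in your own community, the arithmetic behind a result like this is simple: compare the compliance rate of newcomers who saw the rules with those who did not, and report the difference together with its uncertainty. The Python sketch below is purely illustrative and is not the study's actual code or data; the counts, the function name, and the normal-approximation confidence interval are assumptions chosen only to reproduce the reported 75% and 82% rates.

import math

def diff_in_proportions(success_a, n_a, success_b, n_b, z=1.96):
    """Return (effect, low, high): the B-minus-A difference in proportions
    with a normal-approximation 95% confidence interval."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    effect = p_b - p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return effect, effect - z * se, effect + z * se

# Hypothetical counts: control (no rules posted) vs. treatment (rules posted),
# chosen only to match the reported 75% and 82% compliance rates.
effect, low, high = diff_in_proportions(750, 1000, 820, 1000)
print(f"Estimated effect: {effect:+.1%} (95% CI {low:+.1%} to {high:+.1%})")

Reporting an interval rather than a bare percentage-point difference makes clear how much an estimate like this could move with a different sample.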
People sometimes reveal their identities during conflicts in order to increase their influence and gain approval from others on their side. News comments, algorithmic trends, and other popular conversations often become networked battlegrounds, connected to existing conflicts and discussions elsewhere online. Rather than fresh discussions whose norms you can establish, these conversations attract people who already strongly identify with a position and a pattern of behavior established elsewhere, which means that these large-scale struggles are very different from the small decision-making meetings tested in anonymity lab experiments. Networks of "counterpublics" are common in democracies, where contention is a basic part of the political process [27]. This means that when people with specific goals try to reframe the politics of a conversation, they may gain more influence by revealing their pre-existing social status [29]. For example, in high-stakes discussions like government petitions, one case study from Germany found that aggressive commenters were more likely to reveal their identity than to stay anonymous, perhaps in the hope that their comments would be more influential [30].

Abusive communities and hate groups do sometimes attempt to protect their identities, especially in cultures that legally protect such groups while socially sanctioning them. But many hate groups operate openly in an attempt to seek legitimacy [4]. Even in pseudonymous settings, illegal activity can often be traced back to the actors involved, and companies can be compelled by courts to share user information, in the few jurisdictions with responsive law enforcement. Yet law is reactive and cannot respond to escalating risks until something happens. In pseudonymous communities that organize to harm others, social norms are no help, because they encourage prejudice and conflict. Until people in those groups break the law, the only people capable of intervening are courageous dissenters and platform operators [3].

Advocates of real-name policies understand the profound value of working to prevent problems, even if the balance of research does not support their beliefs. Designers can become seduced by the technology challenges of detecting and responding to problems; we need to stop playing defense.

Designers need to see beyond cultural assumptions. Many of the lab experiments on "flaming," "aggression," and anonymity were conducted among privileged, well-educated people in institutions with formal policies and norms. Such people often believe that problem behaviors are non-normative. But prejudice and conflict are common challenges that many people face every day, problems that are socially reinforced by community and societal norms. Any designer who fails to recognize these challenges could unleash more problems than they solve.

Designers also need to acknowledge that design cannot solve harassment and other social problems on its own. Preventing problems and protecting victims is much harder without the help of platforms, designers, and their data science teams. Yes, some design features do expose people to greater risks, and some kinds of nudges can work when social norms line up. But social change at any scale takes people, and we need to apply a similar depth of thought and resources to social norms as we do to design.

Finally, designers need to commit to testing the outcomes of efforts to prevent and respond to social problems. These are big problems, and addressing them is extremely important. The history of social technology is littered with good ideas that failed for years before anyone noticed. Removing anonymity was, on the surface, a good idea, but published research from the field and the lab has shown its ineffectiveness. By systematically evaluating your design and social interventions, you too can add to public knowledge about what works, and increase the likelihood that we learn from our mistakes and build better systems.

Do you agree? Disagree? Have you tried real names, pseudonyms, or anonymity on your site? Go here to discuss this article in our community.

J. Nathan Matias is a PhD candidate at the MIT Media Lab Center for Civic Media and an affiliate at the Berkman Klein Center at Harvard. He conducts independent, public-interest research on flourishing, fair, and safe participation online.

Beyond anonymity, if you are interested in learning more about what to do about social problems online, check out the online harassment resource guide to academic research, the list of resources at the FemTechNet Center for Solutions to Online Violence, and a report I facilitated on high-impact questions and opportunities for online harassment research and action. See also my recent article on the role of field experiments in monitoring, understanding, and establishing social justice online.
References

[1] Sarah Banet-Weiser and Kate M. Miltner. #MasculinitySoFragile: culture, structure, and networked misogyny. Feminist Media Studies, 16(1):171-174, January 2016.
[2] Robert B. Cialdini, Carl A. Kallgren, and Raymond R. Reno. A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in Experimental Social Psychology, 24(20):1-243, 1991.
[3] Danielle Keats Citron and Helen L. Norton. Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91:1435, 2011.
[4] Jessie Daniels. Cyber racism: White supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers, 2009.
[5] Jennifer L. Doleac and Luke C.D. Stein. The visible hand: Race and online market outcomes. The Economic Journal, 123(572):F469-F492, 2013.
[6] Maeve Duggan. Online Harassment. Pew Research, October 2014.
[7] Stefanie Duguay. "He has a way gayer Facebook than I do": Investigating sexual identity disclosure and context collapse on a social networking site. New Media and Society, September 2014.
[8] Benjamin G. Edelman, Michael Luca, and Dan Svirsky. Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment. SSRN Scholarly Paper ID 2701902, Social Science Research Network, Rochester, NY, January 2016.
[9] Yanbo Ge, Christopher R. Knittel, Don MacKenzie, and Stephen Zoepf. Racial and gender discrimination in transportation network companies. Technical report, National Bureau of Economic Research, 2016.
[10] Arlie Russell Hochschild. The Managed Heart: Commercialization of Human Feeling. University of California Press, Berkeley, third edition, updated with a new preface, 1983.
[11] Guanxiong Huang and Kang Li. The Effect of Anonymity on Conformity to Group Norms in Online Contexts: A Meta-Analysis. International Journal of Communication, 10(0):18, January 2016.
[12] Adam N. Joinson. Disinhibition and the Internet. In Psychology and the Internet: Intrapersonal, interpersonal, and transpersonal implications, pages 75-92, 2007.
[13] Sara Kiesler, Jane Siegel, and Timothy W. McGuire. Social psychological aspects of computer-mediated communication. American Psychologist, 39(10):1123-1134, 1984.
[14] Alex Leavitt. "This is a Throwaway Account": Temporary Technical Identities and Perceptions of Anonymity in a Massive Online Community. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 317-327. ACM, 2015.
[15] Amanda Lenhart, Michele Ybarra, Kathryn Zickuhr, and Myeshia Price-Feeney. Online Harassment, Digital Abuse, and Cyberstalking in America. Report, Data & Society Institute, November 2016.
[16] Elizabeth Levy Paluck. The dominance of the individual in intergroup relations research: Understanding social change requires psychological theories of collective and structural phenomena. Behavioral and Brain Sciences, 35(06):443-444, 2012.
[17] Elizabeth Levy Paluck and Donald P. Green. Prejudice reduction: What works? A review and assessment of research and practice. Annual Review of Psychology, 60:339-367, 2009.
[18] Jason Radford. Architectures of Virtual Decision-Making: The Emergence of Gender Discrimination on a Crowdfunding Website. arXiv preprint arXiv:1406.7550, 2014.
[19] Jane Siegel, Vitaly Dubrovsky, Sara Kiesler, and Timothy W. McGuire. Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37(2):157-187, 1983.
[20] Lee Sproull and Sara Kiesler. Reducing Social Context Cues: Electronic Mail in Organizational Communication. Management Science, 32(11):1492-1512, November 1986.
[21] Tiziana Terranova. Free labor: Producing culture for the digital economy. Social Text, 18(2):33-58, 2000.
[22] Kathi Weeks. Life within and against work: Affective labor, feminist critique, and post-Fordist politics. Ephemera, 7(1):233-249, 2007.
[23] JoAnne Yates. Control through communication: The rise of system in American management, volume 6. JHU Press, 1993.
[24] danah boyd. It's Complicated: The Social Lives of Networked Teens. Yale University Press, 2014.
[25] Nancy Fraser. Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, (25/26):56-80, 1990.
[26] Catherine R. Squires. Rethinking the black public sphere: An alternative vocabulary for multiple public spheres. Communication Theory, 12(4):446-468, 2002.
[27] Michael Warner. Publics and counterpublics. Public Culture, 14(1):49-90, 2002.
[28] Christian von Sikorski. The Effects of Reader Comments on the Perception of Personalized Scandals: Exploring the Roles of Comment Valence and Commenters' Social Status. International Journal of Communication, 10:22, 2016.
[29] Robert D. Benford and David A. Snow. Framing processes and social movements: An overview and assessment. Annual Review of Sociology, pages 611-639, 2000.
[30] Katja Rost, Lea Stahel, and Bruno S. Frey. Digital social norm enforcement: Online firestorms in social media. PLoS ONE, 11(6):e0155923, 2016.

Photo by werner22brigitte, CC0 Public Domain.