Supercomputer Takes 40 Minutes To Create Super-Detailed Model Of 1 Second Of Brain Activity


Compute This
Japan's K computer, the world's fourth-most-powerful supercomputer.
RIKEN
Futurists have long talked about the day when computers become as powerful and versatile as the human brain. A recent simulation shows that day is not exactly imminent. In one of the most accurate simulations of the human brain to date, a Japanese supercomputer modeled one second of one percent of human brain activity, a task that took 40 minutes, according to The Telegraph. 

The simulation was carried out by Japan's K computer, the world's fourth most powerful supercomputer, and replicated a network consisting of 1.73 billion neurons. There have been larger simulations before, although they took longer. IBM's SyNAPSE, for example, modeled 530 billion neurons in late 2012, as Popular Science pointed out in April of last year. That's more than the total number of neurons in a human brain, which averages about 86 billion. That project took several hours.

The project allowed researchers to gather "invaluable knowledge that will guide the construction of new simulation software," The Telegraph reported. But it didn't really uncover anything new about the brain--the task was actually meant to "test the limits of simulation technology and the capabilities of the K computer."

Some have suggested that exascale computers, which could carry out one quintillion "floating point" operations per second (a common way of measuring computing speed), would have the same processing power as the human brain. No such computer exists yet; Intel has said it aims to create one by 2018, but other scientists don't think this will be possible before 2020 at the earliest.


    







Human Ancestors Chewed Bulbs And Worms, Study Finds


Nutcracker Man 2
The skull of hominin Olduvai Hominid 5, a "Nutcracker Man," is a famous early human fossil.
Courtesy Donald C Johanson

With his big flat molar teeth and powerful jaws, the hominin Paranthropus boisei long ago earned the nickname “Nutcracker Man.” But for many years, archaeologists debated what this human relative, who roamed East Africa between about 2.4 million and 1.4 million years ago, actually ate. Previous isotope analysis suggested a diet rich in C4 plants—plants that produce compounds with four carbon atoms during photosynthesis—such as grasses and sedges. But some scientists questioned whether such low-quality foods were nutritious enough to fuel the hominin’s large brain.

Now, Oxford University archaeologist Gabriele Macho may have an answer. It’s impossible to observe what the hominins consumed, so Macho studied the diets of present-day baboons in Amboseli National Park in Kenya, which live in an environment similar to the one the Nutcracker Men once inhabited.

Macho found the baboons eat large amounts of bulbs such as tiger nuts, a C4 plant containing the minerals, vitamins, and fatty acids especially important for the hominin brain. The findings, published last week in the journal PLOS ONE, would also explain the wear and tear commonly found on the Nutcracker Men’s teeth, as the starch-rich tiger nuts are highly abrasive when eaten raw.

The hominins also supplemented their diet with fruits and invertebrates, the study says. Tiger nuts, which are still ground into flour for baking in many countries, would have been relatively easy for the Nutcracker Men to find.

“This is why these hominins were able to survive for around one million years because they could successfully forage—even through periods of climatic change,” Macho said in a statement.

Nutcracker Man
The teeth of a "Nutcracker Man" skull show marks of wear and tear.
Courtesy Donald C Johanson

    






Building A Social Network Of Crime


Organized Crime
Michelle Mruk

In Chicago's gang-embattled South Side, a shooting can incite swift retaliation, which spawns even further violence. That’s what may have happened in September in Cornell Square Park when gunmen opened fire on a basketball court just hours after one of them had been wounded in a separate incident. When officers can’t connect the dots between people fast enough, it’s nearly impossible to get ahead of crime. In Cornell Square Park, the fallout wounded 13, including a three-year-old boy.

Gang violence is generally not random. It’s usually related to territorial disputes or personal rifts—that is, to geographic, cultural, and social connections. Some police departments have had marginal success monitoring social networks like Facebook for clues about where bloodshed might erupt next. But a new kind of software being used in Chicago can turn an entire database of arrest records into visual portrayals of real-life social networks, which may someday allow cops to quickly identify a person’s friends and enemies, and hopefully where violence is likely to go down.

Major Paulo Shakarian, an assistant professor at the U.S. Military Academy at West Point, had been developing software to better understand networks of insurgents abroad. It had been tested in Afghanistan but not widely deployed. In 2012, Chicago police officer John Bertetto came across one of Shakarian’s papers and called up West Point with a question: Could social-network analysis help Chicago? Shakarian, working with other professors and a team of cadets, decided to find out. 

The Fallout
Police officers investigate Cornell Square Park in Chicago, where gunmen wounded 13 people in September.
Scott Olson/Getty Images

They created the Organizational, Relationship, and Contact Analyzer (ORCA), which in seconds generates networks that take people with whiteboards and unstructured databases hours or days to produce. Last summer, West Point cadets headed to Chicago to put ORCA to the test. The software combed through three years of anonymized arrests (5,418 total) from one district, turning them into social-connection visualizations and reports on individuals.

ORCA started by linking people who had been arrested together—the most objective way a record shows that people have, at the very least, been at the same place at the same time. From there, it categorized those who had admitted a gang affiliation. And then, based on social links, it gave the others a numerical probability of a particular affiliation. ORCA further analyzed clustered nodes within the network to identify groups and subgroups—a crew occupying a street corner, for example. By zeroing in on people connected across many groups and subgroups, ORCA singled out the most influential ones.
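The paper describing ORCA is public, but the deployed code isn't. Still, the workflow above maps onto textbook network analysis, so here is a minimal sketch in Python using the networkx library and hypothetical arrest records; ORCA's actual data model and algorithms are more elaborate.

```python
import itertools
from collections import Counter

import networkx as nx
from networkx.algorithms import community

# Hypothetical records: (arrest_id, person_id, declared_gang_or_None).
arrests = [
    ("a1", "p1", "G1"), ("a1", "p2", None),
    ("a2", "p2", None), ("a2", "p3", "G1"),
    ("a3", "p3", "G1"), ("a3", "p4", "G2"),
    ("a4", "p2", None), ("a4", "p5", "G2"),
]

# Step 1: link people who were arrested together.
G = nx.Graph()
by_arrest, declared = {}, {}
for arrest_id, person, gang in arrests:
    by_arrest.setdefault(arrest_id, []).append(person)
    if gang:
        declared[person] = gang
for people in by_arrest.values():
    G.add_edges_from(itertools.combinations(people, 2))

# Step 2: for the unaffiliated, estimate affiliation from neighbors'
# declarations, a simple stand-in for ORCA's probability scores.
for person in G:
    if person not in declared:
        counts = Counter(declared[n] for n in G[person] if n in declared)
        total = sum(counts.values())
        if total:
            print(person, {g: c / total for g, c in counts.items()})
            # p2 -> {'G1': ~0.67, 'G2': ~0.33}

# Step 3: cluster the graph into subgroups (crews), then rank members
# by betweenness centrality to surface the most influential people.
print(list(community.greedy_modularity_communities(G)))
rank = nx.betweenness_centrality(G)
print(max(rank, key=rank.get))  # p2 bridges the two clusters
```

The centrality step captures the intuition in the article: someone connected across many subgroups sits on many of the network's shortest paths, which is exactly what betweenness measures.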

For the most part, ORCA verified what the Chicago Police Department already knew. “Early indications are that it’s accurately showing networks that we’re aware of,” says Chief Debra Kirby, who heads the department’s Bureau of Organizational Development. And officers have used it to generate some useful leads. The West Point team continues to develop the program (now called GANG, “GANG Analyzes Networks and Geography”) and will return to Chicago this year. Future iterations may integrate geolocation data or intelligence from informants. An Italian research group has expressed interest in using ORCA to fight organized crime in Italy. And the U.S. Department of Defense is still very much involved. If the software can map the activities of mafiosi and South Side gangs, it could someday be a push-button means to decode bad intentions all around the world.

The Social Network

ORCA software turns a database of arrest records into useful social networks. Here’s an analysis of one particular gang. Each circle represents a person. The more arrests, the larger the circle. Lines connect two people who have been arrested together. ORCA created likely subgroups within the gang, represented by different colors.

 

 

 

This article originally appeared in the January 2014 issue of Popular Science.


    






A Malaysian Language Describes Smells As Precisely As English Describes Colors


photo of a man sniffing a durian
Sniff Test
A man sniffs a durian fruit before deciding to buy it.

You often get a good idea of what things will taste like from a restaurant menu's descriptions. But try doing the same with descriptions of perfumes in catalogs, and you'll have a bit more trouble. What exactly is "steamy amber" supposed to smell like?

Things don't have to be this way, however. Although years of studies in Western societies have found that people are really bad at describing smells, a new study of a language found in Malaysia suggests the deficiency is cultural, not biological. Speakers of Jahai and other related languages have precise words for different smells. They're equivalent to the range of words—red, blue, pink—that English has for colors, according to a study done by two linguists from the Netherlands.

plʔεŋ is the "bloody smell that attracts tigers."

In a series of experiments, the Dutch linguists found that Jahai speakers are able to consistently describe smells in a way that ordinary English-speakers—not smell experts, such as perfume industry people—can't. Presumably, this means that when they're talking to each other about smells, Jahai speakers can get an immediate, accurate idea of what their friends are describing, which is pretty cool.

Also interesting to think about: The linguists, Asifa Majid of Radboud University and Niclas Burenhult of Lund University in Sweden, discussed in their paper how Western researchers had assumed there was something universally ineffable about odors… when really they just hadn't looked past the languages in their own neighborhoods.

Check out some of the words Jahai speakers have for smells. Note that these are abstract words made just for describing these odors. That contrasts with English smell descriptions, which often compare smells with things, using phrases such as "smells like bananas" or "smells like a wet dog."

tpɨt: the smell of certain flowers and ripe fruits. Perfume, soap, Aquilaria wood, durian and bearcats have a tpɨt smell.

Cŋεs: petrol, smoke, bat droppings, bat caves, some species of millipedes, wild ginger roots and wild mango wood all have this smell.

plʔεŋ: this means "a bloody smell that attracts tigers." Squirrel blood and crushed head lice (!!) have it. It is distinct from pʔih, which is the smell that blood, raw fish and raw meat have.

In their experiments, Majid and Burenhult asked both native Jahai speakers and native English speakers to name smells on scratch-'n'-sniff cards and colors on chips. They then compared each person's descriptions of the smells and colors with those of his or her compatriots.

They found that Jahai speakers were equally likely to use the same words as other Jahai speakers to describe both colors and odors. English speakers, on the other hand, usually used the same words for colors, but used wildly different words from each other for smells. How about this English speaker's description of the smell of cinnamon? "I don't know how to say that, I have tasted that gum like Big Red or something tastes like, what do I want to say? I can't get the word. Jesus it's like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum."

English speakers also spent more words on describing smells, suggesting they were having a hard time putting things into words.
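To make "equally likely to use the same words" concrete: one standard way to score naming agreement is the chance that two randomly chosen speakers gave the same name for a stimulus (Simpson's index is a common choice; the study's exact statistic may differ). A minimal sketch with invented responses:

```python
from collections import Counter

def simpson_agreement(responses):
    """Probability that two randomly chosen speakers used the same
    name for a stimulus; 1.0 means perfect agreement."""
    counts = Counter(responses)
    n = len(responses)
    if n < 2:
        return 1.0
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Invented data: ten speakers naming one odor stimulus.
jahai_style = ["cnges"] * 9 + ["tpit"]  # shared abstract smell term
english_style = ["spicy", "Big Red", "cinnamon", "sweet", "gum",
                 "potpourri", "cinnamon", "red hots", "candle", "dessert"]

print(simpson_agreement(jahai_style))    # 0.8   (high agreement)
print(simpson_agreement(english_style))  # ~0.02 (wildly different words)
```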

Descriptions of smells are vital to Jahai life, Majid and Burenhult report. For example, in villages where residents live mainly by foraging, it's important not to bring home animals that have the smell that attracts tigers. The 10 Jahai men the researchers recruited for their study were all foragers, although they also saw modern stuff all the time.

The study appeared in the journal Cognition.


    






Boost Your Sense Of Touch With Ultrasonic Brain Stimulation


Ultrasound
William "Jamie" Tyler of the Virginia Tech Carilion Research Institute studies the effects of ultrasound on the brain region responsible for processing tactile sensory inputs.
James Stroup/Virginia Tech

Scientists have been doing some amazing things to our brains with ultrasound, like breaking up blood clots, boosting alertness for soldiers, and even connecting the human mind with a rat’s. Now, they have shown that ultrasound can sharpen our tactile perception, too.

Researchers at the Virginia Tech Carilion Research Institute have demonstrated that by directing an ultrasound beam to a specific region of the brain, they can boost participants’ ability to detect differences in sensations.

To test their theory, the research team hooked the volunteers up to EEG monitors and placed small electrodes on their wrists. Just before buzzing their hands with the electrodes, the team aimed an ultrasound beam at the region of the volunteers’ brains responsible for processing tactile stimulation. They then gave the participants two classic neurological tests: the first measured the ability to distinguish whether two nearby objects touching the skin are two distinct points or one; the second measured sensitivity to the frequency of a chain of air puffs.

The result was a surprise. The ultrasound actually weakened participants’ brainwaves associated with tactile stimulation, but improved their performance on the tests. How did that happen? The researchers speculate that the specific ultrasound waveform they used may have affected the balance between neurons that excite and neurons that inhibit the processing of sensory stimuli in the targeted brain region.

And it affects a very specific region of the brain—move the beam one centimeter left or right, and the effect disappears.

“That means we can use ultrasound to target an area of the brain as small as the size of an M&M,” William “Jamie” Tyler, who led the study, said in a statement. This specificity could make ultrasound a better technology for non-invasive brain stimulation than two other leading candidates, magnets and electric currents, the researchers said in the study.

Understanding exactly how it works could help scientists make more precise maps of our brain connections and, yes, link our minds with rats’—or our fellow human beings’.


    






How To Calculate Your Exact Commute Times In Rain And Snow


photo of the A102 roadway in London during morning rush hour
London Traffic
Copyright Stephen Craven, licensed for reuse under CC BY-SA 2.0

Seeing some snow outside? Better budget in 7.6 percent more travel time for your morning commute.

In a new study, a team of civil engineers calculated incredibly precise numbers for how weather affects car commute times in the Greater London area. Here's the scoop:

Precipitation      Travel time increase
Light rain         0.1-2.1%
Moderate rain      1.5-3.8%
Heavy rain         4.0-6.0%
Light snow         5.5-7.6%
Heavy snow         7.4-11.4%

The engineers also studied temperatures, but found they do not affect commute times. Thanks, engineers.
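Putting the table to work is simple arithmetic: scale a baseline commute by the reported range for the day's precipitation. A quick sketch in Python (the percentages come from the London study, so treat the output as a rough guide anywhere else):

```python
# Percent increases in travel time, from the London study's table above.
DELAY_RANGES = {
    "light rain":    (0.1, 2.1),
    "moderate rain": (1.5, 3.8),
    "heavy rain":    (4.0, 6.0),
    "light snow":    (5.5, 7.6),
    "heavy snow":    (7.4, 11.4),
}

def adjusted_commute(baseline_minutes, precipitation):
    """Return the (low, high) adjusted commute time in minutes."""
    lo, hi = DELAY_RANGES[precipitation]
    return (baseline_minutes * (1 + lo / 100),
            baseline_minutes * (1 + hi / 100))

# A 40-minute commute in light snow stretches to about 42.2-43.0 minutes.
print(adjusted_commute(40, "light snow"))
```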

The study used some modern tech to count cars. The researchers, a team of three from University College London, grabbed data from government-owned, license plate-reading cameras installed all over the city. That let them track individual cars as they began and ended trips.

These findings could be useful to urban planners, the researchers wrote in a paper they published in the Journal of Transport Geography. In fact, for years, other engineers have performed similar studies in other cities. Those studies focused on less dense, and thus easier to count, cities than London. It turns out that the degree to which people slow down in rain and snow depends on the city, though most studies, like this London one, found that heavier precipitation means slower commutes. 


    






What To Know About The Net Neutrality Ruling


Graffiti "Internet" on the wall in Vodice, Croatia.
Ronald Eikelenboom

The DC Circuit Court has issued a ruling in Verizon v. FCC that is likely to shape the very nature of the internet. At the heart of the case is how the companies that provide internet to consumers can control the flow of information. In 2010, the Federal Communications Commission put forth an order that required "network neutrality," meaning that internet providers had to treat all packets delivered on the internet as equal. Today, a court ruled that the FCC lacks the authority to impose net neutrality on high-speed internet providers.

Without a net neutrality requirement, service providers could turn internet connections into a toll road, charging companies like Netflix or Google extra money to deliver their packets with a higher priority than others. This, in turn, could also slow down the loading of sites that couldn't or refused to pay. The biggest fear is a "cable-ization" of the internet, where certain internet providers only provide service to certain sites, in much the way that cable channels are packaged and sold separately.

To understand the implications of today's ruling, I spoke with two experts on net neutrality and free speech online. James Grimmelmann is a professor of law at the University of Maryland and directs the University's Intellectual Property Program. Josh Levy is the internet campaign director at Free Press, which has already put out a strong statement on the court's ruling in Verizon v. FCC.

What were the stakes in Verizon v. FCC?

James Grimmelmann: Whether the FCC's net neutrality rules — which prevented ISPs like Verizon from discriminating against particular websites or services — were valid. With the anti-discrimination rules struck down, Verizon is free to tell a Netflix, a Google, or a Facebook, "We won't let our customers connect to you unless you also pay us."

Josh Levy: The stakes were no less than the future of the open internet. The FCC’s rules, while imperfect, provided some protections for internet users. With those rules thrown out, ISPs now have the power to block any online content they like. This is the opposite of the open internet. It’s a dark day for internet users.

Do you expect the case to be appealed higher, and if so, how do you see the Supreme Court ruling on it?

"With the rules thrown out, ISPs now have the power to block any online content they like."

Grimmelmann: The FCC could ask the Supreme Court to review the case. But since the court today outlined a route the FCC could follow if it wanted to impose neutrality rules — "reclassifying" broadband service — the Supreme Court would not be likely to step in.

Levy: We’re not sure. Right now, the FCC can fix this problem by reasserting and restoring its authority over broadband connections. Today’s court decision charted a clear path for doing so.

Grimmelmann, you tweeted that the FCC got expanded authority under § 706. In layman's terms, what does that do?

Grimmelmann: In § 706, Congress asked the FCC to promote broadband deployment; the FCC can now use this mission to make regulations that encourage deployment. On Twitter, one of my followers suggested that this authority could be used, for example, to allow cities to build their own broadband networks.

Was this a total victory for Verizon, or did the FCC gain anything in the ruling?

Levy: The FCC didn’t gain anything. It lost its ability to stop ISPs from blocking or discriminating against content.

Levy, you tweeted that the rules were struck down because the FCC passed them the wrong way. What is the right way, and what would secure the rules from challenges in the future?

Levy: The right way is for the FCC to reclassify broadband under Title 2 of the Communications Act (rather than under Title 1, which it did before). This is the legally appropriate way for the FCC to assert its authority, and is the path suggested in a number of court decisions, including today’s.

If ISPs aren't a common carrier, what are they? This quote from the ruling:

We think it obvious that the Commission would violate the Communications Act were it to regulate broadband providers as common carriers. Given the Commission’s still-binding decision to classify broadband providers not as providers of “telecommunications services” but instead as providers of “information services,” see supra at 9–10, such treatment would run afoul of section 153(51)

is a heck of a statement. Could ISPs be re-labeled common carriers by law?

"Economically, it shifts power and money towards ISPs and away from websites and internet services."

Grimmelmann: The FCC has, in previous proceedings, classified ISPs as information services. It could go back and reclassify them as telecommunications services, and there's substantial judicial support that the FCC would be within its powers to do so. But that would also subject ISPs to various other regulatory requirements, and it would be politically controversial.

Levy: Yes, easily. The FCC could, today, reclassify broadband as a “telecommunications” service under Title 2 of the Communications Act and thus reclassify internet providers as common carriers.

Much has been made of comparisons to cable vs broadcast TV in this. Does this ruling lead directly to the "cable-ization" of internet communication?

Grimmelmann: Economically, it shifts power and money towards ISPs and away from websites and internet services. But, at least in the short run, the internet will still look like the internet, rather than the much narrower and intensively programmed world of cable.

Levy: We believe that it does. Without the FCC protecting internet users, ISPs will be free to charge extra for — or block outright — Facebook or Netflix in the same way that cable TV providers offer or don’t offer FX or ESPN.


    






Kanye West Sues The Makers Of 'Coinye' Cryptocurrency


Despite Kanye West's love for science fiction, we only rarely get an opportunity to post about him here at Popular Science. So when that opportunity arises--when, say, West is suing the makers of a knockoff version of bitcoin--we wholeheartedly jump on board.

A not-yet-launched cryptocurrency known as "Coinye" is getting slapped with a lawsuit from West and his legal team. Not hard to see why: the currency's site prominently features a bespectacled West. This, according to the lawsuit filed today in a Manhattan court (and embedded below for your viewing convenience), is very bad for Kanye West. "With each day that passes, Mr. West's reputation is irreparably harmed by the continued use of the COINYEWEST, COINYE and/or COYE marks in connection with Defendants' goods and services," the lawsuit says, in legalese that is priceless. 

The suit asks the court to shut down the Coinye operation before it takes off, and to award as-yet-unspecified damages to West for the harm done to his reputation. The suit cites tweets (also hilariously! the future is full of print lawsuits about internet stuff!) to prove there's some confusion as to whether Kanye was actually involved in the creation of the currency, which he was not, thank you, nerds. The creators of Coinye aren't currently known, so they aren't named in the suit, but West's team expects to identify them as the suit moves forward in court.

So, yes, terrible news. Your get-rich-quick mortgage-payoff scheme will once again be winning the lottery.

 

 


    







Moving Cocktail Garnishes Harness The Power of Surface Tension


It may be rude to play with your food (or drink), but two new cocktail garnishes make it hard not to. Scientists have now developed a boat that zips across the surface of drinks, as well as a "flower" that sops up a tiny, sip-sized dollop of the beverage for your palate-cleansing pleasure. Believe it or not, they are both powered entirely by magic.

Just kidding. Their secret lies in differences in surface tension, the cohesion between molecules that causes water to form droplets on glass, and which is disrupted by soap. Here's what's going on with the boat, as explained by Chemical & Engineering News:

The boat works by the Marangoni effect, which some insects use to propel themselves across water. When the “fuel,” a high-proof alcohol such as Bacardi 151, leaks out of a notch in the boat, the difference in surface tension between it and the cocktail spirits gives the boat enough zip to speed around for up to two minutes.

The design of the flower "pipette," on the other hand, was inspired by a type of floating flower. Its geometry is made to pick up fluids drop by drop, and surface tension prevents the liquid from escaping the "petals." Both were designed by researchers at Massachusetts Institute of Technology last fall, in collaboration with chefs at Jose Andres' ThinkFoodGroup, in Washington, D.C. They are still in development but will soon be available at restaurants owned by the company.


    






Sponges Can Sneeze, May Have New Sensory Organ


Sponge, in repose
The freshwater sponge Ephydatia muelleri, used in the study.
Glen Elliott and Sally Leys
I don't know if you were aware, but sponges sneeze. That's a surprise, since sneezes result when nerve cells sense the presence of some sort of irritating, foreign particle. But sponges, among the earliest animals to evolve, weren't thought to possess a single sensory cell.

That appears to be wrong. 

"The sponge doesn't have a nervous system, so how can it respond to the environment with a sneeze the way another animal that does have a nervous system can?" asks Danielle Ludeman, a doctoral student in evolutionary biology at the University of Alberta, an co-author of a study describing the sponge-sneezes.

The answer, researchers suggest, is that sponges likely do have sensory cells, detecting irritants and drugs with their finger-like cilia, and use this information to contract and expel water. And that's nothing to sneeze at. "For a sponge to have a sensory organ is totally new," said Ludeman's advisor Sally Leys, in a statement. "This does not appear in a textbook; this doesn't appear in someone's concept of what sponges are permitted to have." 

In the study, the researchers demonstrated that cilia line the osculum, the central opening through which sponges expel water and waste. This osculum, they write, may function as a sensory organ. Unlike the human variety, sponge-sneezes take about 30 to 45 minutes, during which time the sponges contract to expel water and then expand.

"This is a very exciting and comprehensive study that clearly demonstrates that sponges are more sophisticated," Gert Wörheide, a sponge-evolution expert from Germany's Ludwig-Maximilians University, told National Geographic.


    






Video: Female Monkeys Throw Stones To Attract Males


Throwing a stone
Tiago Falótico and Eduardo B. Ottoni / PLOS ONE
To signal their readiness to mate and get males' attention, some female capuchin monkeys in a Brazilian forest reserve have taken to throwing stones at the objects of their desire. It's the first time this type of behavior has been witnessed in the wild. To make a scientifically dubious cross-species reference, perhaps they have simply run out of other courtship ideas, like human men honking horns in this Seinfeld bit (at 1:45).

More typically, females signal their readiness to mate by pulling pouting faces, whining loudly, or touching males and running away. But some female bearded capuchin monkeys in Serra da Capivara National Park have taken this more assertive approach. As the BBC reports:

Unlike other monkeys, female capuchins do not have any physical indicators to show when they are at their most fertile or "proceptive". Without brightly colored, swollen genitals or strong smelling odors or liquids to communicate, the capuchins display they are ready to mate through their behavior. 

The stone-throwing was observed first in a group of three female monkeys, toward the peak of their "proceptive" phase, and later in another three females. The authors of the study, published in November in the journal PLOS ONE, wrote that this suggests the behavior has been learned and passed on.

Oftentimes the rocks didn't actually hit the males, but in two cases, males hit with rocks ended up mating with the stone-slinging females. Hey, whatever works.

 


    






Video: A Marine With A Prosthetic Hand Controlled By His Own Muscles


Testing the Arm
SSgt. James Sides, left, talks with Dr. Paul Pasquina, principal investigator on a new implantable device that can control a prosthetic limb with an amputee's own muscle.
Courtesy Uniformed Services University

Out on a routine reconnaissance mission in Afghanistan’s Helmand province, Marine Staff Sergeant James Sides reached out his right hand to grab the bomb. It was the ordnance disposal tech’s fifth deployment overseas, and his second to Afghanistan. But this time, July 15, 2012, the improvised explosive device detonated. Sides was blinded in his left eye and lost his right arm below the elbow.

After a long recovery at Walter Reed National Military Medical Center, Sides learned to use a prosthetic hand. Then 11 months later, he went back to the hospital — this time, for a surgical implant that could represent the future of prosthetics.

The implanted myoelectric sensors (IMES) in Sides’ right arm can read his muscles and bypass his mind, translating would-be movement into real movement. The IMES System, as its developers are calling it, could be the first implanted multi-channel controller for prosthetics. Sides is the first patient in an investigational device trial approved by the U.S. Food and Drug Administration.

“I have another hand now,” he says.

 

 

Inspired in large measure by veterans like Sides, many groups are working on smarter bionic limbs; you can read about some of them here. Touch Bionics makes a line of bionic hands and fingers that respond to a user’s muscle feedback, but they attach over the skin. DARPA’s Reliable Neural-Interface Technology (RE-NET) program bridges the gaps among nerves, muscles and the brain, allowing users to move prosthetics with their thoughts alone. This is promising technology, but it requires carefully re-innervating (re-wiring nerves) the residual muscles in a patient’s limb. The implant Sides received is simpler, and potentially easier for doctors and patients to adopt. The project is funded by the Alfred Mann Foundation for Scientific Research.

It uses the residual muscles in an amputee’s arm — the ones that would normally drive movement in the hand — and picks up their signals with a half-dozen electrodes. The tiny platinum/iridium electrodes, about 0.66 inches long and a tenth of an inch wide, are embedded directly into the patient’s muscle. They are powered by magnetic induction, so there is no need to swap batteries or plug them in — a crucial development in making them user-friendly, according to Dr. Paul Pasquina, principal investigator on the IMES system and former chief of orthopedics and rehabilitation at Walter Reed.

It translates muscle signals into hand action in less than 100 milliseconds. To Sides, it’s instantaneous: “I still close what I think is my hand,” he says. “I open my hand, and rotate it up and down; I close my fingers and the hand closes. It’s exactly as if I still had a hand. It’s pretty gnarly.”
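The IMES signal chain itself is proprietary, but myoelectric decoding in general follows a recognizable pattern: rectify and smooth each muscle channel into an "envelope" of activity, then map the dominant channel to a hand command. A toy illustration in Python (the muscle names, threshold, and command mapping are invented for the example):

```python
import numpy as np

def envelope(emg, window=50):
    """Rectified moving-average envelope of a raw EMG trace."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(emg), kernel, mode="same")

def decode(channels, threshold=0.3):
    """Map the most active muscle channel to a hand command.

    channels: dict of muscle name -> raw EMG samples (hypothetical).
    """
    levels = {name: float(envelope(sig).mean())
              for name, sig in channels.items()}
    name, level = max(levels.items(), key=lambda kv: kv[1])
    if level < threshold:
        return "rest"
    return {"flexor": "close hand", "extensor": "open hand",
            "pronator": "rotate wrist"}[name]

rng = np.random.default_rng(0)
channels = {
    "flexor":   rng.normal(0, 1.0, 1000),  # strongly active muscle
    "extensor": rng.normal(0, 0.1, 1000),  # near-quiet
    "pronator": rng.normal(0, 0.1, 1000),
}
print(decode(channels))  # -> "close hand"
```

A production controller would run a loop like this continuously; since each decision needs only the most recent window of samples, sub-100-millisecond latency is plausible.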

In the video at the bottom, watch him take a swig of Red Bull, lift a heavy Dutch oven lid and sort tiny blocks into separate bins.

IMES implantable electrode
A new type of implantable myoelectric sensor can control a prosthetic limb by reading the user's residual muscle movements.
Courtesy Alfred E. Mann Foundation

While skin-connected smart prosthetics are more sophisticated than body-controlled, analog ones, they have two key drawbacks, according to Pasquina: their limited range of motion and the skin barrier between the device and the muscle. Because myoelectric devices connect over the skin, the connection is inexact, which means patients have to learn to flex a different suite of muscles to properly activate the electrodes. “You have to reprogram your mind,” as Pasquina puts it. “We’re going after the muscles that are doing things they’ve been programmed to be able to do.”

What’s more, an external connection is easy to interrupt — reaching overhead, or getting wet, can dislodge the electrodes and render the prosthetic useless. Sides recalls sweating profusely one day while using his original on-the-skin myo-device.

“I would lose connectivity, and the hand would go berserk,” he says.

Maybe most importantly, external myoelectric devices can’t think seamlessly in three dimensions. For patients like Sides, that means moving the hand up or down or side to side. But when he wanted to move his thumb, he had to nudge it in the proper position using his left hand. Now, the device responds as if he was moving it himself in one fluid motion, he says. “It makes day-to-day life a lot easier.”

Right now, the system is designed for up to three simultaneous degrees of movement. Future systems will include up to 13 angles of motion and pre-programmed patterns, much like Touch Bionics’ i-Limb myoelectric hand. Pasquina said natural movement has been a major challenge for the DARPA project as well as his own research.

“We can create these arms, but how will the user control the arm, and integrate it as part of themselves?” he says. “It’s not just controlling an arm, it’s controlling my arm; ‘I want to control this and make it feel like it’s a part of me.’”

Someday, the sensors could be implanted immediately after a traumatic injury — right after an amputation and before a patient’s wound is even closed up. Further research will need to show that the system works in multiple people, Pasquina says. But that’s after several years of testing and trials monitored by the FDA.

“This is not a science experiment. This is something we want to influence the lives of our service members in a positive way,” he says.

 

 


    






This Phone Is An NSA-Free, Secret-Storing Black Box


 

 

Ah. Another day, another NSA spying revelation. Here, at least, is a potential alternative for people who'd like to keep their gadgets and their privacy. Meet Blackphone, a sleek smartphone that encrypts communications.

Blackphone, according to its creators, is an Android-based gadget that can make video and voice calls, as well as send texts and files--all the while blocking out prying eyes through a custom operating system called PrivatOS. The creators are still quiet on most of the details, but the phone does come with an impressive pedigree: it's a joint project from security company Silent Circle and phone maker Geeksphone, and the team includes Phil Zimmermann, the creator of Pretty Good Privacy (PGP), the popular encryption system for sub rosa communications.

Still, we might have to wait until more info on the phone is released next month to know how well the phone works as, well, a phone. Although as you will see in the video here, there is a person dressed in all black and sunglasses using the phone. She probably has secrets! Perhaps they are being well-hidden!

[Blackphone]


    






The Editor's Letter From The February 2014 Issue Of Popular Science Magazine


Change Is the Only Constant
Paul Park Photography

Sometime around the sixth century BCE, the philosopher Heraclitus established a series of doctrines that upended the thinking of the day. By all accounts, Heraclitus was not an easy man to get along with. He derided Pythagoras and Homer as idiots and was referred to as the “Weeping Philosopher” because of his rather dim view of human nature. But for all his quirks of character, Heraclitus did strike upon one fundamental idea: “Nothing endures but change.”

In its 141 years, Popular Science has seen its share of change. That’s apparent in what we cover, of course. Our magazine has witnessed the birth of modern automobiles, spacecraft, nuclear power, computers, DNA sequencing, the Internet, mobile phones, and Candy Crush. But it’s also apparent in the magazine itself. The first issue of Popular Science, in May 1872, was a dense black-and-white journal—a far cry from what it’s become.

If you’re familiar with the magazine, you probably noticed that this issue is quite different from previous ones. We have a new, more contemporary logo—the 24th in our history—and we have a new design, complete with new sections. “Feed” provides a window into Popular Science and serves as a venue for your commentary. “Now” is our home for today’s technology and science culture, while “Next” covers the ideas and innovations actively shaping the future. “Manual” is our reimagined DIY section packed with even more amazing projects, and “Ask Anything” is still the most entertaining Q&A in any magazine.

With so much transformation, it bears stating that certain things don’t change (sorry, Heraclitus). Popular Science has always provided a vision of the future and portraits of those making it, and we will continue to do so. We’re similarly dedicated to explaining how science and technology impact the world around us right now, as I hope you’ll see in our cover story on the Winter Olympics and our investigation of the FBI’s new facial-recognition database.

Before I sign off, there is one more change I should mention—and that’s me. This is my first issue as the editor in chief of Popular Science, making me the 21st editor of the magazine. Few titles can claim such a long lineage, and I’m proud and privileged to be a part of it. And I believe I come to it honestly. I started my career as a molecular biologist, and for the last three years, I served as this magazine’s executive editor, grappling day in and day out with the perpetually evolving world of science and technology. As I move forward in my new role, my job will be to identify and explain the forces poised to remake our planet and our lives. So let’s go ahead and embrace change in all its forms, for it’s only through transformation that the future is made. Somewhere, I’d like to think, the Weeping Philosopher is smiling.

Enjoy the issue.

Click here to read the February 2014 issue.


    






How One Man's One-Way Trip To Mars Is Dividing His Family


Mars One Concept
Bryan Versteeg and Mars One

Ken Sullivan has applied for a one-way trip to Mars, and, good news, he's made it past the first round of Mars One applicants! Hooray! Surely nothing could stand in his way.

Except his wife and four children, who are less than ecstatic about the idea. 

Jeez. Just can't let a guy have any fun.

From a report in the Salt Lake Tribune:

It’s led to some difficult conversations around the dinner table.

"The question is do we get divorced now or get divorced later," she said Sunday night in their living room. "If I stand in the way of his dreams and passions, then we get divorced now, so I have to be supportive."

Oof. Yeah, I guess, although if a spouse told his partner, It's my dream to leave you and the kids through literally the most accelerated means possible, in fact rocketing off of this very planet, in the near future, that would perhaps be grounds for less dream-support. Maybe just let that one die.

But the real kicker is the children quoted in the article, one of whom delivers this line right out of some domestic science-fiction tragedy:

"I don’t like it, not at all," said Kaitlyn, 12, about the thought of her dad blasting off to Mars, never to return. "When he leaves, I’d have a way to talk to him, but I can’t ever see him in person again.

"It makes me sad because I’m going to miss him."

Sullivan's wife says they'll cross the divorce road if he actually makes it from the current pool of applicants down to the final four. Anyway, at least he's being open about it, and not trying to pull My father said he was going out for cigarettes and then left for Mars.

[Salt Lake Tribune via Gawker]

 


    







Shape-Shifting Wing Design Prepares For Testing


For over a century, airplane wings have used flaps to alter their shape for better flight performance: extending to generate extra lift during takeoff, tilting to slow the plane and generate more braking power during landings, and staying neutral during normal flight. Yet flaps, as discrete parts, are imperfect, letting air through gaps or catching more air than necessary during flight, leading to inefficiencies that in turn mean higher fuel costs. FlexFoil, an appropriately named technology from the similarly subtly named company FlexSys, is a seamless flexible wing surface that can work like flaps but without the inefficiencies that come with being a physically separate part.

The concept is explained in "Mission Adaptive Compliant Wing – Design, Fabrication and Flight Test," a 2006 research paper whose lead author, Sridhar Kota, is the founder of FlexSys. In this system, the strain placed upon the structure (here, a wing) is distributed through the whole of the structure, in a similar way to how multiple cables attached to a few towers support the whole of a suspension bridge. What is special about this system is that, unlike the cables on a bridge, the strands can deliberately be pulled and warped, thanks to the elastic nature of the connections and their distribution throughout the wing. Servos and actuators inside the wing pull the strings, and all of this is controlled and coordinated by a computer algorithm, which interprets the pilot's commands and bends the wing accordingly.

The boring yet practical promise of this technology is more fuel-efficient airplanes, with projected fuel savings of up to 8 percent for airplanes with wings converted to FlexFoil, and in theory as high as 12 percent on airplanes designed with the technology in mind. What makes this way more interesting than just minor fuel savings is the new ways a flexible wing could be used, allowing more control options for wing edges than just extending or pivoting up and down.

In order for that new future to be realized, the wings will first have to complete testing. FlexSys intends to test the new wing technology on a converted Gulfstream jet in July of 2014.


    






The End Of Anonymity


The End Of Anonymity
Joan Vicent Canto Roig/Getty Images

Click here to see how a useless photo turns into an identifiable face.

Detective Jim McClelland clicks a button and the grainy close-up of a suspect—bearded, expressionless, and looking away from the camera—disappears from his computer monitor. In place of the two-dimensional video still, a disembodied virtual head materializes, rendered clearly in three dimensions. McClelland rotates and tilts the head until the suspect is facing forward and his eyes stare straight out from the screen.

It’s the face of a thief, a man who had been casually walking the aisles of a convenience store in suburban Philadelphia and shopping with a stolen credit card. Police tracked the illicit sale and pulled the image from the store’s surveillance camera. The first time McClelland ran it through facial-recognition software, the results were useless. Algorithms running on distant servers produced hundreds of candidates, drawn from the state’s catalog of known criminals. But none resembled the suspect’s profile closely enough to warrant further investigation. 

It wasn’t altogether surprising. Since 2007, when McClelland and the Cheltenham Township Police Department first gained access to Pennsylvania’s face-matching system, facial-recognition software has routinely failed to produce actionable results. While mug shots face forward, subjects photographed “in the wild,” whether on the street or from a ceiling-mounted surveillance camera, rarely look directly into the lens. The detective had grown accustomed to dead ends. 

But starting in 2012, the state overhauled the system and added pose-correction software, which gave McClelland and other trained officers the ability to turn a subject’s head to face the camera. While I watch over the detective’s shoulder, he finishes adjusting the thief’s face and resubmits the image. Rows of thumbnail mug shots fill the screen. McClelland points out the rank-one candidate—the image mathematically considered most similar to the one submitted.

It’s a match. The detective knows this for a fact because the suspect in question was arrested and convicted of credit card fraud last year. McClelland chose this demonstration to show me the power of new facial-recognition software, along with its potential: Armed with only a crappy screen grab, his suburban police department can now pluck a perpetrator from a combined database of 3.5 million faces.

This summer, the reach of facial-recognition software will grow further still. As part of its Next-Generation Identification (NGI) program, the FBI will roll out nationwide access to more than 16 million mug shots, and local and state police departments will contribute millions more. It’s the largest, most comprehensive database of its kind, and it will turn a relatively exclusive investigative tool into a broad capacity for law enforcement. Officers with no in-house face-matching software—the overwhelming majority—will be able to submit an image to the FBI’s servers in Clarksburg, West Virginia, where algorithms will return a ranked list of between 2 and 50 candidates.
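The FBI hasn't published the matching algorithms behind NGI, but the ranked-candidate behavior described here is easy to picture with a generic scheme: represent each face as a numeric feature vector (a "template") and rank the whole gallery by similarity to the probe. A schematic sketch in Python, with random vectors standing in for real templates:

```python
import numpy as np

def top_candidates(probe, gallery, k=50):
    """Rank gallery faces by cosine similarity to the probe template.

    probe: 1-D feature vector; gallery: dict of subject_id -> vector.
    Returns the k best (subject_id, score) pairs, rank one first.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {sid: cosine(probe, vec) for sid, vec in gallery.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

rng = np.random.default_rng(1)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(1000)}
# A noisy re-sighting of subject 42 should come back as the rank-one hit.
probe = gallery["mugshot_42"] + rng.normal(scale=0.1, size=128)
print(top_candidates(probe, gallery, k=5)[0])  # ('mugshot_42', ~0.99)
```

The hard part, as the rest of this story makes clear, isn't the ranking; it's producing a probe template good enough that the right face lands near the top.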

The $1.2-billion NGI program already collects more than faces. Its repositories include fingerprints and palm prints; other biometric markers such as iris scans and vocal patterns may also be incorporated. But faces are different from most markers; they can be collected without consent or specialized equipment—any cameraphone will do the trick. And that makes them particularly ripe for abuse. If there’s any lesson to be drawn from the National Security Agency’s (NSA) PRISM scandal, in which the agency monitored millions of e-mail accounts for years, it’s that the line between protecting citizens and violating their privacy is easily blurred. 

So as the FBI prepares to expand NGI across the United States, the rational response is a question: Can facial recognition create a safer, more secure world, with fewer cold cases and missing children and more criminals behind bars? And can it do so without ending anonymity for all of us?

 

***

 

The FBI’s Identification Division has been collecting data on criminals since it formed in 1924, starting with the earliest-used biometric markers—fingerprints. Gathered piecemeal at first on endless stacks of ink-stained index cards, the collection now comprises some 135 million digitized prints. Early forensic experts had to work by eye, matching the unique whorls and arcs of prints lifted from crime scenes to those already on file. Once computers began automating fingerprint analysis in the 1980s, the potentially months-long process was reduced to hours. Experts now call most print matching a “lights-out” operation, a job that computer algorithms can grind through while humans head home for the night.

Fingerprints don’t grow mustaches, and DNA can’t throw on a pair of sunglasses. But faces can sprout hair and sag with time.

Matching algorithms soon evolved to make DNA testing, facial recognition, and other biometric analysis possible. And as it did with fingerprints, the FBI often led the collection of new biometric markers (establishing the first national DNA database, for example, in 1994). Confidence in DNA analysis, which involves comparing 13 different chromosomal locations, is extremely high—99.99 percent of all matches are correct. Fingerprint analysis for anything short of a perfect print can be less certain. The FBI says there’s an 86 percent chance of correctly matching a latent print—the typically faint or partial impression left at a crime scene—to one in its database, assuming the owner’s print is on file. That does not mean that there’s a 14 percent chance of identifying the wrong person: Both DNA and fingerprint analysis are admissible in court because they produce so few false positives.

Facial recognition, on the other hand, never identifies a subject—at best, it suggests prospects for further investigation. In part, that’s because faces are mutable. Fingerprints don’t grow mustaches, and DNA can’t throw on a pair of sunglasses. But faces can sprout hair and sag with time and circumstance. People can also resemble one another, either because they have similar features or because a low-resolution image tricks the algorithm into thinking they do.  

As a result of such limitations, no system, NGI included, serves up a single, confirmed candidate but rather a range of potential matches. Face-matching software nearly always produces some sort of answer, even if it’s completely wrong. Kevin Reid, the NGI program manager, estimates that a high-quality probe—the technical term for a submitted photo—will return a correct rank-one candidate about 80 percent of the time. But that accuracy rating is deceptive. It assumes the kind of image that officers like McClelland seldom have at their disposal. 

Candid Camera: Facial Analysis In The Wild
When a shopper enters Reebok’s flagship store in New York City, a face-detection system analyzes 10 to 20 frames per second to build a profile of the potential customer. The algorithms can determine a shopper’s gender and age range as well as behavioral and emotional cues, such as interest in a given display (it tracks glances and the amount of time spent standing in one place). Reebok installed the system, called Cara, in May 2013; other companies are following suit. Tesco recently unveiled a technology in the U.K. that triggers digital ads at gas stations tailored to the viewer’s age and gender. Face detection shouldn’t be confused with facial recognition. Cara extracts data from up to 25 faces at once, but it doesn’t record or match them against a database. “The images are destroyed within a fraction of a second,” says Jason Sosa, the CEO of New York–based IMRSV, which developed the software. Most businesses aren’t interested in collecting your face, just the demographic info etched into it.
Courtesy IMRSV

During my visit to the Cheltenham P.D., another detective stops by McClelland’s cubicle with a printout. “Can you use this?” he asks. McClelland barely glances at the video still—a mottled, expressionistic jumble of pixels—before shaking his head. “I’ve gotten to the point where I’ve used the system so much, I pretty much know whether I should try it or not,” he says. Of the dozens of photos funneled his way every week, he might run one or two. When he does get a solid hit, it’s rarely for an armed robbery or assault and never for a murder. 

Violent criminals tend to obscure their faces, and they don’t generally carry out their crimes in public. If a camera does happen to catch the action, McClelland says, “they’re running or walking fast, as opposed to somebody who’s just, la-di-da, shoplifting.” At this point, facial recognition is best suited to catching small-time crooks. When your credit card is stolen and used to buy gift cards and baby formula—both popular choices, McClelland says, because of their high resale value—matching software may come to the rescue. 

Improving the technology’s accuracy is, in some ways, out of the FBI’s hands. Law-enforcement agencies don’t build their own algorithms—they pay to use the proprietary code written by private companies, and they fund academics developing novel approaches. It’s up to the biometrics research community to turn facial recognition into a genuinely powerful tool, one worthy of the debate surrounding it.

 

***

 

In August 2011, riots broke out across London. What started as a protest of a fatal shooting by police quickly escalated, and for five days, arson and looting were rampant. In the immediate aftermath of the riots, the authorities deployed facial-recognition technology reportedly in development for the 2012 Summer Olympics. “There were 6,000 images taken of suspects,” says Elke Oberg, marketing manager at Cognitec, a German firm whose algorithms are used in systems worldwide. “Of those, one had an angle and quality good enough to run.”

Facial recognition can be thwarted by any number of factors, from the dirt caked on a camera lens to a baseball hat pulled low. But the technology’s biggest analytical challenges are generally summed up in a single acronym: APIER, or aging, pose, illumination, expression, and resolution. 

A forward-facing mug shot provides a two-dimensional map of a person’s facial features, enabling algorithms to measure and compare the unique combination of distances separating them. But the topography of the human face changes with age: The chin, jawline, and other landmarks that make up the borders of a specific likeness expand and contract. A shift in pose or expression also throws off those measurements: A tilt of the head can decrease the perceived distance between the eyes, while a smile can warp the mouth and alter the face’s overall shape. Finally, poor illumination and a camera with low resolution both tend to obscure facial features.
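A toy version of that measure-and-compare step, assuming hand-picked landmark coordinates (production algorithms extract far richer features automatically): compute the pairwise distances between landmarks, normalize by the distance between the eyes so image scale drops out, and compare the resulting signatures.

```python
import itertools
import math

def distance_signature(landmarks):
    """Pairwise landmark distances, normalized by interocular distance
    so the signature doesn't change with image resolution."""
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [math.dist(a, b) / scale
            for a, b in itertools.combinations(landmarks.values(), 2)]

def dissimilarity(face_a, face_b):
    """Sum of differences between two faces' distance signatures."""
    return sum(abs(x - y) for x, y in zip(distance_signature(face_a),
                                          distance_signature(face_b)))

mugshot = {"left_eye": (30, 40), "right_eye": (70, 40),
           "nose": (50, 60), "mouth": (50, 80), "chin": (50, 100)}
# The same face at twice the resolution still matches perfectly...
rescaled = {k: (2 * x, 2 * y) for k, (x, y) in mugshot.items()}
print(dissimilarity(mugshot, rescaled))  # 0.0
# ...but tilt the head and every projected distance shifts, which is
# why pose correction matters so much in the wild.
```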

For the most part, biometrics researchers have responded to these challenges by training their software, running each algorithm through an endless series of searches. And companies such as Cognitec and NEC, based in Japan, teach programs to account for exceptions that arise from suboptimal image quality, severe angles, or other flaws. Those upgrades have made real progress. A decade ago, matching a subject to a five-year-old reference photo meant overcoming a 25 percent drop in accuracy, or 5 percent per year. Today, the accuracy loss is as low as 1 percent per year.

Computer scientists are now complementing those gains with companion software that mitigates the impact of bad photos and opens up a huge new pool of images. The best currently deployed example is the 3-D–pose correction software, called ForensicaGPS, that McClelland showed me in Pennsylvania. New Hampshire–based Animetrics released the software in 2012, and though the company won’t share exact numbers, it’s used by law-enforcement organizations globally, including the NYPD and Pennsylvania’s statewide network.

Before converting 2-D images to 3-D avatars, McClelland adjusts various crosshairs, tweaking their positions to better line up with the subject’s eyes, mouth, chin, and other features. Then the software creates a detailed mathematical model of a face, capturing data that standard 2-D–centric algorithms miss or ignore, such as the length and angle of the nose and cheekbones. Animetrics CEO Paul Schuepp says it “boosts anyone’s face-matching system by a huge percentage.” 

Social media has precisely what facial recognition needs: billions of high-quality, camera-facing head shots, many of them tied directly to identities.

Anil Jain, one of the world’s foremost biometrics experts, has been developing software at Michigan State University (MSU) that could eliminate the need for probe photos entirely. Called FaceSketchID, its most obvious function is to match forensic sketches—the kind generated by police sketch artists—to mug shots. It could also make inferior video footage usable, Jain says. “If you have poor quality frames or only a profile, you could make a sketch of the frontal view of the subject and then feed it into our system.”

A sketch artist, in other words, could create a portrait based not on an eyewitness account of a murder but on one or more grainy, off-angle, or partially obscured video stills of the alleged killer. Think of it as the hand-drawn equivalent of Hollywood-style image enhancement, which teases detail from a darkened pixelated face. A trained artist could perform image rehabilitation, recasting the face with the proper angle and lighting and accentuating distinctive features—close-set eyebrows or a hawkish, telltale nose. That drawing can then be used as a probe, and the automatic sketch ID algorithms will try to find photographs with corresponding features. Mirroring the artist’s attention to detail, the code focuses less on finding similar comprehensive facial maps and more on the standout features, digging up similar eyebrows or noses.

At press time, the system, which had been in development since 2011, had just been finished, and Jain expected it to be licensed within months. Another project he’s leading involves algorithms that can extract facial profiles from infrared video—the kind used by surveillance teams or high-end CCTV systems. Liquor-store bandits aren’t in danger of being caught on infrared video, but for more targeted operations, such as tracking suspected terrorists or watching for them at border crossings, Jain’s algorithms could mean the difference between capturing a high-profile target and simply recording another anonymous traveler. The FBI has helped support that research.

Individually, none of these systems will solve facial recognition’s analytical problems. Solutions tend to arrive trailing caveats and disclaimers. A technique called super-resolution, for example, can double the effective number of pixels in an image but only after comparing images snapped in extremely rapid succession. A new video analytics system from Animetrics, called Vinyl, automatically extracts faces from footage and sorts them into folders, turning a full day’s work by an analyst into a 20-minute automated task. But analysts still have to submit those faces to matching algorithms one at a time. Other research, which stitches multiple frames of video into a more useful composite profile, requires access to massive computational resources.

But taken together, these various systems will dramatically improve the technology’s accuracy. Some biometrics experts liken facial recognition today to fingerprint analysis decades ago. It will be years before a set of standards is developed that will lift it to evidentiary status, if that happens at all. But as scattered breakthroughs contribute to better across-the-board matching performance, the prospect of true lights-out facial recognition draws nearer. Whether that’s a promise or a threat depends on whose faces are fair game.

***

The best shot that Detective McClelland runs during my visit, by far, is pulled from social media. For the purposes of facial recognition, it couldn’t be more perfect—a tight, dead-on close-up with bright, evenly distributed lighting. There’s no expression, either, which makes sense. He grabbed the image from the profile of a man who had allegedly threatened an acquaintance with a gun.

This time, there’s no need for Animetrics’ 3-D wizardry. The photo goes in, and the system responds with first-, second-, and third-ranked candidates that all have the same identity (the images were taken during three separate arrests). The case involved a witness who didn’t have the suspect’s last name but was connected to him via social media. The profile didn’t provide a last name either, but with a previous offender as a strong match, the detective could start building his case.

The pictures we post of ourselves, in which we literally put our best face forward, are a face matcher’s dream. Animetrics says it can search effectively against an image with as few as 65 pixels between the eyes. In video-surveillance stills, eye-to-eye pixel counts of 10 or 20 are routine, but even low-resolution cameraphone photos contain millions of pixels. 
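
Those pixel counts suggest a simple pre-screening rule. A hypothetical quality gate, using the 65-pixel figure Animetrics cites:

```python
import math

# Eye centers as (x, y) pixel coordinates from a landmark detector.
def interocular_px(left_eye, right_eye):
    return math.dist(left_eye, right_eye)

def worth_matching(left_eye, right_eye, threshold=65):
    """True if the face is sharp enough to bother searching; typical
    CCTV stills (10 to 20 px between the eyes) fail this test."""
    return interocular_px(left_eye, right_eye) >= threshold
```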

Social media, then, has precisely what facial recognition needs: billions of high-quality, camera-facing head shots, many of them directly tied to identities. Google and Facebook have already become incubators for the technology. In 2011, Google acquired PittPatt (short for Pittsburgh Pattern Recognition), a facial-recognition start-up spun out of Carnegie Mellon University. A year later, Facebook acquired Israel-based Face.com and redirected its facial-recognition work for internal applications. That meant shutting down Face.com’s KLIK app, which could scan digital photos and automatically tag them with the names of corresponding Facebook friends. Facebook later unveiled an almost identical feature, called Tag Suggestions. Privacy concerns have led the social network to shut down that feature throughout the European Union.

Google, meanwhile, has largely steered clear of controversy. Former CEO Eric Schmidt has publicly acknowledged that the company has the technical capability to provide facial-recognition searches. It chooses not to because of the obvious privacy risks. Google has also banned the development of face-matching apps for its Google Glass wearable computing hardware.

Facebook didn’t respond to interview requests for this story, and Google declined. But using images stored by the social media giants for facial recognition isn’t an imaginary threat. In 2011, shortly after PittPatt was absorbed by Google, Carnegie Mellon privacy economist Alessandro Acquisti demonstrated a proof-of-concept app that used PittPatt’s algorithms to identify subjects by matching them to Facebook images. Mining demographic information freely accessible online, Acquisti could even assign Social Security numbers to some.

Deploying a national or global equivalent, which could match a probe against a trillion or more images (as opposed to a few hundred thousand, in Acquisti’s case), would require an immense amount of processing power—something within the realm of possibility for Silicon Valley’s reigning data companies but currently off the table. “That doesn’t mean it will not happen,” says Acquisti. “I think it’s inevitable, because computational power keeps getting better over time. And the accuracy of face recognition is getting better. And the availability of data keeps increasing.”
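
For a rough sense of that scale, here is a back-of-envelope calculation; every throughput figure in it is an assumption, invented for illustration:

```python
images = 1e12            # probe gallery: a trillion photos
cmp_per_core_sec = 1e6   # template comparisons per core-second (assumed)
cores = 10_000           # a modest slice of a hyperscale data center

seconds_per_probe = images / (cmp_per_core_sec * cores)
print(f"~{seconds_per_probe:.0f} s per probe "
      f"({seconds_per_probe / 60:.1f} minutes)")  # ~100 s under these guesses
```

Under those guesses a single search takes minutes, not milliseconds—tractable only for organizations that already run data centers at planetary scale.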

So that’s one nightmare scenario: Social media will intentionally betray us and call it a feature. Acquisti predicts it will happen within 20 years, at most. But there’s another, less distant path for accessing Facebook and Google’s billions of faces: Authorities could simply ask. “Anything collected by a corporation, that law enforcement knows they’ve collected, will eventually get subpoenaed for some purpose,” says Kevin Bowyer, a computer-science professor and biometrics and data-mining expert at the University of Notre Dame.

The question isn’t whether Facebook will turn over data to law enforcement. The company has a history of providing access to specific accounts in order to assist in active investigations. It has also been sucked into the vortex of the NSA’s PRISM program, forced along with other companies to allow the widespread monitoring of its users’ data. “What we’ve seen with NSA surveillance and how the FBI gets access to records is that a lot of it comes from private companies,” says Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, a nonprofit digital-rights group. “The data, the pictures, it becomes a honeypot for the government.”

The FBI, it should be noted, is not the NSA. Tempting as it is to assign guilt by association or in advance, there’s no record of overreach or abuse of biometrics data by the agency. If NGI’s facial database exists as the FBI has repeatedly described it, as a literal rogue’s gallery consisting solely of mug shots, it carries relatively few privacy risks.

And yet “feature creep”—or the stealthy, unannounced manner in which facial-recognition systems incorporate new data—has already occurred. When I asked McClelland whether he could search against DMV photos, I thought it would be a throwaway question. Drivers are not criminals. McClelland looked at me squarely. “In Pennsylvania? Yes.” 

Police have had access to DMV photos for years; they could search the database by name, location, or other parameters. But in mid-2013, they gained the ability to search using other images. Now, every time they run a probe it’s also checked against the more than 30 million license and ID photos in the Pennsylvania Department of Transportation’s (PennDOT) database. McClelland tells me he doesn’t get many hits, and the reason for that is the system’s underlying algorithm. PennDOT’s priority isn’t to track criminals but to prevent the creation of duplicate IDs. Its system effectively ignores probes that aren’t of the perfect forward-facing variety captured in a DMV office. As a result, one of the most potentially unsettling examples of widespread face collection, comprising the majority of adults in Pennsylvania—those who didn’t surrender their privacy by committing a crime but simply applied for state-issued identification—is hobbled by a simple choice of math. “I kind of like it,” says McClelland. “You’re not getting a lot of false responses.”

The benefit those PennDOT photos could have on criminal investigations is anyone’s guess. But this much is certain: Attempts are being made, thousands of times a year, to cross the exact line that the FBI promises it won’t—from searching mug shots to searching everyone.

***

It would be easy to end there, on the cusp of facial recognition’s many potential nightmare scenarios. The inexorable merger of reliable pattern analysis with a comprehensive pool of data can play out with all the slithering dread of a cyber thriller. Or, just maybe, it can save the day.

Last May, a month after the Boston Marathon bombings killed three people and injured more than 250, MSU’s Anil Jain published a study showing what could have been. Jain ran the faces of both suspects through NEC’s NeoFace algorithm using surveillance images collected at the scene of the detonations. The older Tsarnaev brother, Tamerlan, actually had a mug shot on file, the result of a 2009 arrest for assault and battery, yet it failed to show up within the top 200 candidates: in the surveillance images he was wearing sunglasses and a cap, and the algorithm couldn’t match his face to his booking photo.

Dzhokhar Tsarnaev was another matter. Jain placed a photo of the younger brother, taken on his graduation day, in a million-image data set composed primarily of mug shots. In a blind search—meaning no demographic data, such as age and gender, narrowed the list of potential candidates—NeoFace paired a shot from the bombing with Tsarnaev’s graduation-day photo as a rank-one result. Facial recognition would have produced the best, and essentially only, lead in the investigation. 

There’s a catch. The reference photo was originally posted on Facebook. In order to have pulled off that match, law-enforcement officers would have needed unprecedented access to the social network’s database of faces, spread throughout a trillion or more images. More than three days after the bombings, the Tsarnaevs brought the investigation to a close by murdering an MIT police officer and fighting a running gun battle through the streets of Cambridge and nearby Watertown. Time for facial analysis would have been short, and the technical hurdles might have been insurmountable.

Still, it could have worked.

Maybe it’s too early to debate the boundaries of facial recognition. After all, its biggest victories and worst violations are yet to come. Or maybe it’s simply too difficult, since it means wrestling with ugly calculations, such as weighing the cost of collective privacy against the price of a single life. But perhaps this is precisely when and why transformative technology should be debated—before it’s too late. 

Four Ways Your Body Can Betray You

1) Finger/Palm: The latent prints collected at crime scenes include finger and palm prints. Both can identify an individual, but latent impressions tend to be smudged and incomplete. Last April, the FBI revolutionized print analysis, rolling out the first national palm-print database and updating algorithms to triple the accuracy of fingerprint searches.

2) DNA: Matching a suspect’s DNA to a crime-scene sample used to mean waiting up to 60 days for lab results. IntegenX recently released RapidHIT technology, which compares DNA in 90 minutes—fast enough to nail a suspect during interrogation. The two-by-two-foot scanner packs a lab’s worth of chemical analyses onto a single disposable microfluidic cartridge.

3) Iris: An iris scan requires suspects to stare directly into a nearby camera, making it all but useless for criminal investigations. But it’s a foolproof approach to authentication, and nearly any consumer-grade camera can capture the unique patterns in the eye. Schools, prisons, and companies (including Google) already use iris scans for security. 

4) Voice: Though voice recognition is largely a commercial tool—banks such as Barclays use it to verify money transfers—vocal-pattern matching also catches crooks. Within 30 seconds of phone conversation, the system created by Nuance Communications can build a unique voice print and then run it against a database of prints from confirmed fraudsters.

The Five Biggest Challenges To Facial Recognition

1) Age: Years take a toll on the face. The more time that has passed between two photos of the same subject, the more likely the jawline will have changed or the nose bloomed. Any number of other features can also lose their telltale similarities with age.

2) Pose: Most matching algorithms compare the distance between various features—the space separating the eyes, for example. But a subject turned away from the camera can appear to have wildly different relative measurements; a quick numerical sketch follows this list.

3) Illumination: Dim lighting, heavy shadows, or even excessive brightness can have the same adverse effect, robbing algorithms of the visual detail needed to spot and compare multiple features.

4) Expression: Whether it’s an open-mouthed yell, a grin, or a pressed-lip menace, if a subject’s expression doesn’t match the one in a reference shot, key landmarks (such as mouth size and position) may not line up.

5) Resolution: Most facial-recognition algorithms are only as good as the number of pixels in a photograph. That can be a function of everything from camera quality to the subject’s distance from the lens (which dictates how much zooming is needed to isolate the face).
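
To put a number on the pose problem (entry two above): under a crude flat-face approximation, turning away from the camera shrinks the apparent eye-to-eye spacing by roughly the cosine of the yaw angle.

```python
import numpy as np

frontal_px = 65  # a workable frontal eye-to-eye distance
for yaw in (0, 15, 30, 45, 60):
    apparent = frontal_px * np.cos(np.radians(yaw))
    print(f"yaw {yaw:2d} deg -> ~{apparent:.0f} px between the eyes")
```

By 60 degrees of yaw, a healthy 65-pixel face has degraded to CCTV-grade spacing, one reason frontal-only systems like PennDOT’s ignore off-angle probes.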


This article originally appeared in the 2014 issue of Popular Science.








Glowing Plants Now Up For Auction


In Light / In Dark
Bioglow

St. Louis-based biotech company Bioglow has put 20 of its light-emitting plants up for auction. First off, here's where you can bid. The plant, called Starlight Avatar, glows blue-green throughout its life cycle, but lives only for two to three months. 

To create the plants, Bioglow added genes from bioluminescent marine bacteria to the chloroplast genome of an ornamental species of tobacco called Nicotiana alata. Back in 2010, Bioglow scientists published a description of this process, which has been approved by the U.S. Department of Agriculture, in the journal PLOS ONE; you can read it here.

Don't try to grow the Starlight Avatar in your backyard — it's a sensitive plant that's meant to be kept indoors. According to Bioglow, the light-emitting pathway can't be transferred by pollen to other plant populations. (Phew? Darn?)

In the future, Bioglow wants to engineer plants with petals that glow one color and leaves that glow another. Beyond offering glow-in-the-dark greenery for your windowsill, Bioglow has some ambitious goals for its autoluminescent plants. The company hopes to delve into sustainable lighting, such as marking the sides of driveways and highways with the sprouts in order to reduce dependency on electricity and fossil fuels.








Researchers Create Micro-Flyer That "Swims" in Air


Leif Ristroph and Stephen Childress of New York University have taken an alternate route around evolution's highway: They've created a miniature flying machine that moves not like a bird or insect, but like a jellyfish. Rather than flapping its four wings up and down, this micro-ornithopter opens and closes them in a swimming motion to ascend, descend, and hover.

Flying machines that mimic the flight of animals like bees or hummingbirds tend to flip upside-down unless the build also includes “aerodynamic dampeners,” such as tails or sails, or a continuous control system for wing speed and motion. This flyer's inward-outward wing motion allowed it to maintain stability without those add-ons.

The flying machine was announced last fall, and its design and mathematics are fully explored this week in the Journal of the Royal Society Interface.

The jellyfish flyer is just 10 centimeters across, with four 8-centimeter wings. A tether wired to an external power source supplies energy rather than an on-board battery. The entire 'thopter weighs 2.1 grams, including a 1.1-gram motor. To achieve stability along different flight paths, the researchers experimented with wing trim, varied flapping patterns among the four wings, and different voltages.

This flyer is a prototype, emphasize the researchers, an early step toward practical micro-aircraft. "In the future," they write, "small-scale flapping-wing aircraft may be used in applications ranging from surveillance and reconnaissance missions to traffic and air quality monitoring."

Illustration from "Stable hovering of a jellyfish-like flying machine" paper
Design for a flying machine:
Researchers at New York University have created a small, stable four-winged flyer with wings that move inward and outward, similar to a jellyfish, rather than flapping up and down like a bird or insect. The top figures illustrate the body and wing design; the bottom figures detail the motor assembly (left), and the wingspar assembly (right).
Leif Ristroph and Stephen Childress

 








This Collapsible, Camera-Toting Drone Surely Couldn't Spy On People



The Pocket Drone is a tiny, tiny drone that can quickly collapse and be concealed, but is also strong enough to carry a high-quality camera. What might people use this for? Oh, you know, recreation. Filming footage of bike rides or aerial shots of birthday parties. Not spying. Ha! Definitely not spying.

Kidding! (Kinda.) The Pocket Drone is actually a neat little gizmo, which explains why it has so absurdly rocketed past its Kickstarter goal, now at more than $200,000 of an asked-for $35,000. The 1-pound drone can fold its propellers into about the space of a 7-inch tablet. Despite its small size, it can carry a sports camera like a GoPro. The Pocket Drone costs less than $500 (for Kickstarter backers). That's not a half-bad bundle of features for wannabe intelligence agencies, er, sports photography enthusiasts.

[Kickstarter]






