
Want MLB Players To Suck Less And Stay In The League Longer? Give Them Naps

Jeremy Hermida

Ron Reiring

Or some kind of downtime, at least. Two studies find negative effects for MLB players when they get worn out.

The slog of the baseball season isn't easy on a player, unless he's some kind of robot. Turns out the longer teams grind through MLB's 162-game schedule, the worse they do.

A new study tracked strike-zone judgment from April through September of last year's season, seeing how often players were willing to swing at pitches they should've let go. Out of 30 teams, 24 were worse by September, and averaged across all teams, performance was considerably worse overall. Data from the prior six seasons showed a similar downward trend, suggesting that the 2012 data wasn't a fluke.

So what was it? Fatigue, the researchers theorize: fewer days off over the course of a season along with the strain of travel. Practice throughout the season might've helped players improve, but it was outweighed by the stress of playing so many games, the researchers say.

Another recent study charted not just one season but entire careers against sleepiness. Lo and behold, tiredness correlated with a shorter career in the big leagues.

Researchers had major leaguers self-report based on the Epworth Sleepiness Scale, a (very short) survey designed to test sleepiness during the day. (What are your chances of dozing when "sitting quietly after a lunch without alcohol"?) The survey was given to a random sample of 80 MLB players before the 2010 season started. The players with the highest scores were the least likely to be in the league three seasons later--they were either demoted to a lower league, not signed again, or for some other reason weren't playing. A relatively low score of 5 on the scale meant a player had a 72 percent chance of still being around when researchers checked back after three seasons. But the players who scored 15? Their odds of making it fell to 14 percent.

That's quite a trend, even if it's always good to have some skepticism toward studies based on self-reported surveys. (Are those players definitely sleepier than others? Or are they just more likely to say they are?) It's also worth mentioning the old correlation-versus-causation rule: were the sleepier players pushed out because they were sleepy, or was some related problem at work?

Both teams of researchers suggest downtime or sleepiness screening could help offset the problems. Good for owners, and definitely good for players. Now let's extend naps into all careers.

    



Does The Tesla Model S Electric Car Pollute More Than An SUV?

Does the supposedly clean, green Tesla Model S really pollute more than a gas-guzzling Jeep Grand Cherokee sport-utility vehicle?

That's what one analyst has claimed.

In an exhaustive 6,500-word article on the financial website Seeking Alpha, analyst Nathan Weiss lays out a case that the Model S actually has higher effective emissions--both of the greenhouse gas carbon dioxide and of smog-producing pollutants like sulfur dioxide--than most large SUVs.

As a 2013 Tesla Model S owner, I was shocked and concerned by his claims.

Although carbon emissions were not a big factor in my decision to buy a plug-in car--I was more interested in performance, style, and low operating cost--the car's green cred was a nice bonus.

Now here's this Weiss guy, calling me a global-warming villain.

But I couldn't help but notice that in his role as financial analyst, Weiss had been advising his clients to "short" the stock of Tesla Motors [NSDQ:TSLA]--to bet against it. (Tesla stock price down = happy clients; Tesla stock up = very unhappy clients.)

And is it a coincidence that the article appeared the same day Tesla stock skyrocketed 30 percent, after Tesla's first-quarter earnings report? (It's since risen another 30 percent.)

Weiss's motives aside, his claims deserve a close look on their merits.

Not only the tailpipe

Like all 100-percent electric cars, the Model S indisputably has zero tailpipe emissions.

But Weiss looks at emissions from the powerplants that supply the Tesla's electric "fuel," as well as the excess electricity consumed by the Model S due to charging inefficiencies and "vampire" losses.

These two factors, he concludes, give the Model S effective carbon emissions roughly equal to those of a Honda Accord.

Throw in the carbon emitted during production of the Model S's 85-kWh lithium-ion battery, says Weiss, and the Model S ends up in Ford Expedition territory.

Not so fast....

Although Weiss makes a number of valid points, I see several flaws in his argument. And he bases his carbon-footprint estimates of battery production on a single report that is far out of sync with previous research on the subject.

Furthermore, he fails to account for the carbon emissions resulting from the production of gasoline. If the carbon footprint of a Tesla's fuel counts against it, why shouldn't a standard car's fuel be subject to similar accounting?

So let's go through his analysis and his conclusions point by point.

*Power plant emissions count against electric cars

Virtually all electric-car advocates agree that when toting up the environmental pros and cons of electric cars, it's only fair to include powerplant emissions.

When this has been done previously, the numbers have still favored electric cars. The Union of Concerned Scientists, for example, concluded in a 2012 report, "Electric vehicles charged on the power grid have lower global warming emissions than the average gasoline-based vehicle sold today."

The carbon-friendliness of the electric grid, of course, varies wildly from region to region, depending upon the type of powerplants there.

2013 Tesla Model S

Tesla Motors has an interactive calculator on its website that allows you to calculate the effective carbon emissions of your Model S, depending on your particular state's powerplant mix (coal, gas, nuclear, hydro, etc.). The numbers range from 26 gm/mi in Idaho (mostly hydro) to 310 gm/mi in West Virginia (mostly coal).

According to Weiss, the national average for Tesla's claimed Model S CO2 emissions works out to 163 gm/mile. Tesla says the corresponding figure for gas cars is 400 gm/mi.
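
For the curious, the per-mile math behind these figures is simple: multiply consumption (in kWh per mile) by the carbon intensity of the local grid (in grams of CO2 per kWh). Here's a minimal sketch; the 283 Wh/mi is Tesla's number, while the grid intensities are back-calculated assumptions chosen to reproduce the cited 26, 163, and 310 gm/mi figures, not values pulled from Tesla's calculator.

    # Per-mile CO2 = consumption (kWh/mi) x grid carbon intensity (g CO2/kWh).
    WH_PER_MILE = 283  # Tesla's assumed consumption at a steady 55 mph

    grid_intensity_g_per_kwh = {
        "Idaho (mostly hydro)": 90,           # assumed; yields ~26 gm/mi
        "U.S. average": 576,                  # assumed; yields ~163 gm/mi
        "West Virginia (mostly coal)": 1095,  # assumed; yields ~310 gm/mi
    }

    for region, intensity in grid_intensity_g_per_kwh.items():
        grams_per_mile = WH_PER_MILE / 1000 * intensity
        print(f"{region}: {grams_per_mile:.0f} g CO2/mi")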

Although not truly zero-emission, electric cars in general (and the Model S in particular) are still better than most gas cars. Or so goes the mainstream scientific thinking.

Weiss begs to differ.

*Tesla's numbers are too optimistic

According to the Tesla website, the company assumes a Model S electricity usage of 283 watt-hours per mile for its CO2 calculations. That's the energy required to drive a mile at a steady 55 mph.

Weiss disputes that number as unrealistically low. He cites, among other sources, the EPA's number of 321 Wh/mi, as well as 48 reports on the Tesla owners' forum that averaged to 367 Wh/mi.

He concludes that the real-world power consumption of the 85-kWh Model S is actually more like 375 Wh/mi. That's 33 percent higher than Tesla claims.

Accordingly, CO2 emissions would also be 33 percent higher.

I can't argue with Weiss on this one. In 3,000 miles of driving my 60-kWh Model S, I've averaged 343 Wh/mi. Since my 60-kWh car is about 7 percent more efficient than the heavier 85-kWh model, that would correspond to a real-world consumption of 367 Wh/mi for the longer-range car.

Because my driving--as well as that of the 48 Tesla owners Weiss cites--has occurred mostly in winter, I would expect average energy usage to decline as the weather warms. (I've already seen my efficiency improve in May.) I'd guesstimate a real-world year-round number for the 85-kWh Model S of 340 Wh/mi.

But I won't quibble with Weiss's figure of 375.

So a 33-percent bump raises Tesla's claimed Model S effective carbon emissions of 163 gm/mi to 216 gm/mi, or about the same as the Toyota Prius V.

*Charging losses boost carbon emissions by 18 percent

Not every kilowatt-hour of energy that comes out of the wall plug ends up in the Model S battery. Citing EPA figures and reports from owners, Weiss estimates the Model S's real-world charging efficiency at about 85 percent.

Again, Weiss has a good point. I've measured charging losses of 10-15 percent in my own car. Tesla quotes a "peak charging efficiency" of 92 percent on its website. An average charging efficiency of 85 percent seems plausible.

That means a Model S typically draws roughly 18 percent more energy from the wall plug than actually ends up in the battery.

So now our Model S carbon emissions are up to 254 gm/mi, slightly less than those of a 2013 Honda Civic.

*Vampire losses further raise emissions by 55 percent

Whoa! This is truly a shocking claim. It implies that vampire losses--the power used by the Model S when it's off, just sitting there in your garage--amount to nearly as much as Tesla claims the car uses while driving.

Weiss, citing a number of sources (including my own report on Model S vampire losses on this site), settles on a figure for vampire losses of 5.1 kWh per day. He then combines that figure with an estimate of 7,728 miles driven per year to conclude that vampire-related Model S CO2 emissions amount to 140 gm/mi.

This brings his new total up to 394 gm/mi, about the same as a BMW 5-Series.
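
Weiss's per-mile vampire figure, at least, checks out arithmetically. Here's a quick back-of-the-envelope sketch, assuming the same roughly 576 g/kWh U.S.-average grid intensity used above:

    # Vampire losses per mile = daily idle drain x 365 / annual mileage.
    VAMPIRE_KWH_PER_DAY = 5.1   # Weiss's estimate of idle drain
    MILES_PER_YEAR = 7_728      # Weiss's (disputed) annual-mileage estimate
    GRID_G_PER_KWH = 576        # assumed U.S.-average grid intensity

    vampire_kwh_per_mile = VAMPIRE_KWH_PER_DAY * 365 / MILES_PER_YEAR
    print(f"{vampire_kwh_per_mile * GRID_G_PER_KWH:.0f} g CO2/mi")  # ~139, close to Weiss's 140 gm/mi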

I'd call Weiss's number for vampire drain a bit high, but not implausible. I measured at-the-wall vampire losses averaging 4.5 kWh per day on my car.

One reason for Weiss's high-ball estimate may be his apparent misunderstanding of the Model S battery thermal management system. He claims that vampire losses in the 30-to-50-degree range are nearly triple those occurring at temperatures of 50 to 80 degrees, due to the extra juice required to keep the battery warm.

This is simply wrong. I have noticed no such variations.

And a Tesla rep confirmed to me that the Model S battery is not temperature-controlled when the car sits idle, so there is no battery heating/cooling power draw. (Elon Musk has publicly confirmed this.) The brief pre-heat/cool prior to the once-a-day "topping off" charge cycle would have only a minimal impact on vampire losses.

I also take issue with Weiss's estimate of the Model S average yearly driving distance of only 7,728 miles. (His derivation of the number is too lengthy to analyze here.)

2013 Tesla Model S and 2011 Chevy Volt

How could it be that Model S owners drive barely half as much as the national average of 13,476 miles per year? I know my own driving mileage has actually increased since I got my Model S, simply because the car is such a blast to drive.

It's only temporary

But Weiss's major miscue in the section about vampire power drain--other than misspelling my name--is his implication that these daily losses are a permanent long-term condition.

Tesla has in fact been working on "sleep mode" software improvements to reduce vampire losses. Its next major update, due this summer, is expected to cut vampire losses by half.

By the end of the year, they will be virtually eliminated, according to Tesla spokesperson Shanna Hendricks.

Weiss acknowledges the promised sleep mode, but doubts that it will make any difference. "History (and the mechanics of the battery) suggest it will not meaningfully reduce idle power consumption," he writes.

I suggest it will. And that by the end of the year, 55 percent of Weiss's argument will have gone up in smoke.

Anticipating the new sleep mode, I'm going to ignore vampire losses and stick with 254 gm/mi as the Model S carbon footprint, compared to Weiss's vampire-bloated number of 394 gm/mi.

*Battery production adds 39 percent more

The manufacture of a car contributes to its lifetime carbon emissions. And it's well established that the manufacture of lithium-ion batteries is a carbon-intensive process. The question is, how much?

For his battery-production carbon numbers, Weiss relies primarily on an outlier study from the Journal of Industrial Ecology. Its estimates of carbon footprint from lithium-ion battery production are far higher than previous studies, and it has been pilloried in the blogosphere for numerous errors too arcane to enumerate here.

A 2010 study in a journal of the American Chemical Society, on the other hand, concludes that the environmental impact of the battery is "relatively small." It estimates that battery production adds about 15 percent to the driving emissions of an electric car.

A 2012 study for the California Air Resources Board puts the number at 26 percent, assuming the California powerplant mix. But if you adjust to the dirtier national U.S. grid powerplant mix, driving emissions go up. So the percentage share of battery production goes down, also to about 15 percent.

Tesla may, in fact, beat even those lower numbers. Uniquely among electric car manufacturers, Tesla uses what are arguably the most efficiently manufactured lithium-ion battery cells on the planet: "commodity" 18650 laptop cells, which Panasonic churns out by the billions in highly automated plants. (I'm unaware of any carbon life-cycle analysis for these batteries.)

We'll go with the consensus mainstream number of 15 percent, which brings total Model S carbon emissions up to 292 gm/mi, against Weiss's battery-boosted grand total of 547 gm/mi.

Carbon summary

We've arrived at a figure for the real-world effective CO2 emissions of a Model S: 292 g/mi. Admittedly, that's a lot higher than Tesla claims on its website.
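
Spelled out as a short script, the running total looks like this, applying each adjustment discussed above to Tesla's 163 gm/mi starting point:

    g_per_mile = 163          # Tesla's claimed U.S.-average figure
    g_per_mile *= 375 / 283   # real-world consumption vs. Tesla's assumption (~33 percent higher)
    g_per_mile *= 1 / 0.85    # charging losses (85 percent average charging efficiency)
    # Vampire losses omitted, anticipating the promised sleep-mode update.
    g_per_mile *= 1.15        # battery production, using the ~15 percent consensus adder
    print(f"{g_per_mile:.0f} g CO2/mi")  # ~292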

But worse than a Grand Cherokee? Hardly.

The Grand Cherokee's official EPA CO2 number is 479 g/mi when fitted with the smallest engine offered, a 3.6-liter V-6. The more powerful V-8 model logs in at a whopping 592 g/mi.

Oops...

In a follow-up post a few days later, Weiss backed off and significantly downgraded his estimate for Model S carbon emissions.

He concedes that, in calculating vampire losses per mile, a total distance of 12,000 miles per year makes for a better comparison. He also downgrades his estimate of idle power losses to 3.5 kWh per day.

And, strangely, he neglects to account for the carbon footprint of battery production in any way.

2013 Tesla Model S interior

With these new numbers, he recalculates the Tesla's total effective carbon emissions to be 346 g/mi, not a lot more than the 292 g/mi I calculated above.

Weiss also downgrades his SUV bogeyman, pointing out that even at his revised lower figure of 346 g/mi, the Model S is still a worse carbon polluter than the Toyota Highlander, which the EPA rates at 312 g/mi.

What about carbon from gasoline production?

But for all his zeal in exhaustively parsing the carbon footprint of electricity production, Weiss conveniently forgets to mention that producing gasoline also has its own carbon footprint.

According to a 2000 report from the MIT Energy Lab, gasoline production accounts for 19 percent of the total lifetime CO2 emissions of a typical car. Actually driving the car accounts for about 75 percent of its lifetime carbon output.

Thus the carbon footprint of fuel production adds about 25 percent to a gas car's nominal CO2 emissions number.

Sorry, Mr. Weiss. If you apply the same rules to gasoline cars that you did to the Tesla, your Toyota Highlander just went from 312 g/mi to 390 g/mi.

On this adjusted apples-to-apples basis, the Tesla figure of 292 g/mi is roughly comparable to that of the Scion iQ.
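
Here's that adjustment spelled out, using the MIT Energy Lab shares cited above; it's a rough sketch, and the 312 g/mi is the Highlander's EPA tailpipe-only rating:

    UPSTREAM_SHARE = 0.19   # share of a gas car's lifetime CO2 from producing the fuel
    DRIVING_SHARE = 0.75    # share of lifetime CO2 from actually burning it

    adder = UPSTREAM_SHARE / DRIVING_SHARE   # ~0.25, i.e. about 25 percent

    highlander_tailpipe = 312   # g/mi, EPA rating
    print(f"{highlander_tailpipe * (1 + adder):.0f} g/mi")  # ~391; rounded to 390 above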

*Other pollutants

With all the growing concern about global warming and carbon emissions, old-fashioned "smog" air pollution--primarily nitrogen oxides (NOx) and sulfur dioxide (SO2)--has receded into the background.

Due to strict emissions laws, modern gasoline cars emit very little of these lung-threatening pollutants. The same cannot be said, unfortunately, about coal-fired powerplants.

Weiss calculates that powerplant emissions give the Model S an effective level of NOx pollution about triple that of the EPA limit for gas cars. (I'm discounting his suspect inclusion of vampire losses.)

The situation for sulfur dioxide is much worse. Weiss calculates that effective Model S sulfur dioxide emissions equal that of about 400 gas cars. (Again, the suspect vampire data is discounted.)

Weiss writes, "In many states, including California, if a smog-testing center could measure the effective emissions of a Tesla Model S through a tailpipe, the owner would face fines, penalties, or the sale of the vehicle under state 'clunker buyback' programs."

In terms of sulfur dioxide, gas cars are so clean and coal-fired electricity so dirty that a 60-watt light bulb effectively emits as much sulfur dioxide as an average gasoline car driving at 60 mph.

Frankly, I can't argue with these disturbing numbers, and I have not seen them refuted anywhere. But they say more about the tough emission laws for gas cars and the remarkably lax rules for coal-fired powerplants belching sulfur dioxide than they do about the Model S.

Nevertheless, I'm feeling a bit guilty about the sulfur dioxide spewing out of my Tesla's virtual tailpipe.

At least I live in New York state, which uses coal for only about 10 percent of its power production. That's about one quarter of the U.S. nationwide percentage, so presumably I'm "only" 100 times worse than a gas car when it comes to sulfur dioxide emissions.
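
That "only 100 times worse" guess comes from scaling Weiss's U.S.-average comparison by New York's smaller coal share--a rough sketch that assumes effective SO2 tracks the coal fraction of the local grid more or less linearly, and that the nationwide coal share at the time was about 40 percent (four times New York's):

    US_EQUIVALENT_GAS_CARS = 400   # Weiss's figure on the U.S.-average grid
    US_COAL_SHARE = 0.40           # assumed nationwide coal share
    NY_COAL_SHARE = 0.10           # New York's approximate coal share

    ny_equivalent = US_EQUIVALENT_GAS_CARS * NY_COAL_SHARE / US_COAL_SHARE
    print(f"Roughly {ny_equivalent:.0f} gas cars' worth of SO2")  # ~100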

Fortunately, I'm not alone; the vast majority of electric cars operate in states with low-coal grids like California, Washington, and New York.

And the grid is slowly getting cleaner. As more wind, solar, and natural gas come online and antiquated coal plants are shut down, my effective SO2 emissions will steadily decline.

So in the end...

After all of this, the conclusion seemed clear: I drive a kick-ass, high-performance, five-seat all-electric luxury sport sedan that has the same wells-to-wheels carbon emissions as a tiny Scion minicar with two real seats.

Anybody got a problem with that?

When it comes to virtual tailpipe emissions, carbon and otherwise, the Model S ain't perfect.

But if you ask me, it's a huge step in the right direction.

This article, written by David Noland, was originally published on Green Car Reports, a publishing partner of Popular Science. Follow GreenCarReports on Facebook and Twitter.

    


How People Die On Mount Everest [Infographic]

Avalanches, exposure, heart attacks, and the other tragic ways Everest has claimed climbers' lives.

Last week, 80-year-old Yuichiro Miura became the oldest person to climb Mount Everest. Amazing--but other climbers haven't been so lucky. As of 2012, an estimated 235 people had died on the mountain.

This infographic, from designer Ed Hernandez, breaks down what happened to them. Some of the causes of death are what you might expect--an avalanche or unexpected fall are the top two causes--but others are more surprising: altitude sickness accounted for nine deaths, while cerebral edemas--fluid buildup in the brain that happens at high altitudes--killed five people.

Most of the climbers died while climbing back down the mountain, usually in the "Death Zone," which begins at 8,000 meters (the mountain itself is 8,850 meters tall) and lethally combines bitter cold, low oxygen, and extreme altitude.

The infographic calls Everest "the highest graveyard on planet earth." No kidding.

Deaths on Mount Everest

[visual.ly]

    


Happening Now: An Asteroid And Its Moon Sail Past Earth

First Radar Images of 1998 QE2

NASA/JPL-Caltech/GSSR

The pair will make their closest approach at 4:59 pm ET today.

An asteroid is sailing past Earth today, and it's bringing along some luggage.

At 4:59 pm ET, asteroid 1998 QE2 will be a mere 3.6 million miles away from our home planet. And it's not alone. It's bringing along a moon that orbits it as the pair flies along its path.

Okay, "mere" is an exaggeration. It will pass at a distance roughly 15 times that between the Earth and the moon. The asteroid is not a threat to Earth, NASA says.

Astronomers first discovered 1998 QE2 in 1998, hence its name. They only discovered that it had an orbiting moon on May 29, however, when radar images showed the moon as a smudge by its side. Among near-Earth asteroids 200 meters (655 feet) in diameter or larger, only about 16 percent are made up of two or three pieces that travel in tandem, called binary or triple asteroids. The new radar images suggest 1998 QE2 is 1.7 miles in diameter. Its moon is 2,000 feet wide.

The pair will not get this close to Earth again for another 200 years.

Over the next week, astronomers using the Deep Space Network antenna in Goldstone, California, and the Arecibo Observatory in Puerto Rico will observe 1998 QE2 as it comes by. Radar helps astronomers discern asteroids' shapes, rotation, surface features and orbits. Such observations are a part of NASA's larger effort to learn about asteroids so it can predict when any do threaten Earth--although, of course, we're not always the greatest at that quite yet.

[NASA]

    


UN Expert Worries About Killer Robots, Ignores The Ones That Already Exist

Fictional F/A-37

A fictional autonomous robot fighter plane from the 2005 movie "Stealth."

Wikimedia Commons

Autonomous war robots are coming. Panicking about them will only make things worse.

Yesterday, a United Nations expert called for a halt and moratorium on developing "lethal autonomous robotics," or, in layman's terms, "killer robots."

His argument: once killer robots take part in war, there will be no going back. Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, told the Human Rights Council that now is the time to regulate and stop killer robots, arguing that "decisions over life and death in armed conflict may require compassion and intuition." He also urged the council to form a panel that would study whether international laws in place today adequately address the use of killer robots.

Thing is, killer robots already exist. And they're about the least compassionate machines we could imagine.

I'm talking about land mines, those notorious devices that detonate when stepped on. Land mines are, in effect, programmed to kill when certain conditions are met. That is the same principle guiding a killer robot.

But there are some key differences: A killer robot might make a decision based on algorithms and inputs, internal coding and pre-programmed combat behaviors. It might be programmed to understand the laws of war, and it might use surveillance technologies to make distinctions between unarmed civilians and armed combatants. The same principles that power facial recognition software could apply to robots targeting their weapons at other weapons, so they fire to disable guns and not to kill people.

Land mines, on the other hand, fail to distinguish between civilians and soldiers, between soldiers of different nations, and between an animal, a large child, and a small soldier. Land-mine triggers cannot easily be shut off and are designed for durability, not intelligence. At their worst, killer robots could be as deadly and as indiscriminate as land mines. Chances are, though, they will be much more sophisticated.

The task before lawmakers is not to ban a technology out of fear but to adapt the law to the technology once it exists. Making legislative decisions about new technology is tricky business. In the United States, electronic communication is governed by a law passed well before email was a regular fixture of life. Provisions that made sense to congressmen in 1986 trying to imagine email led to great weaknesses in privacy and personal security, all because the technology wasn't understood when the law was written. The stakes are much lower in governing electronic communications than in authorizing robots to kill.

Killer robots are coming. Efforts to halt their introduction or ban their development are not only likely to fail, but they'll drown out legitimate concerns about the safest way to implement the technology with Luddite fear-mongering.

    


The Week In Numbers: The Fastest Human, Nerf's Longest-Range Gun, And More

Countdown to Soyuz rocket launch, May 28, 2013

NASA/Bill Ingalls

5 hours and 39 minutes: the time, from launch to docking, it took this week's Soyuz crew to get from Earth to the International Space Station. A new world record!

9.48 seconds: the limit to how fast a natural human could possibly run the 100-meter race, according to a Stanford biologist. That's 0.10 seconds faster than Usain Bolt's current world record.

8 percent: the portion of humans that still have the bendy, chimp-like feet of our tree-dwelling ancestors (yes, 1 in 13 people!)

1,100 degrees Fahrenheit: the heat produced by the Beam Down Tower in Masdar City, a fascinating experiment in concentrated solar power

$1 million: the money asteroid mining company Planetary Resources hopes to raise on Kickstarter to build an orbiting space telescope that ordinary people can control from Earth

100 feet: the firing range of Nerf's newest gun, which shoots MEGA darts nearly 55 mph

27 hours: the time this Australian police department spent making a 3-D printed pistol. The plastic gun exploded.

1953: the year a botched lobotomy left Henry Molaison unable to form new memories. His personal tragedy would eventually revolutionize neuroscience.

71 percent: the accuracy with which this beer-pouring robot can predict the future, three seconds in advance

$979: the price of the first office chair engineered to support the slouched backs and roving elbows of smartphone and tablet users

$5 million: the cost to produce a floating generator that harnesses energy from ocean currents

30,000: the number of homes this awesome wave farm will be able to power

    


Where Is The Next Carl Sagan?

Subjective Measures

Ryan Snook

Before people will understand science, scientists must understand people.

In 1954, a study published by Princeton and Dartmouth researchers asked their students to watch a recording of a football game between the two schools and count infractions. The Princeton students reported twice as many violations against Princeton as Dartmouth students did. In a 2003 study, Yale researchers asked people to evaluate proposed (fictional) policies about welfare reform, with political parties' endorsements clearly stated. They found that their subjects sided with their political parties regardless of their personal ideologies or the policies' content. A study by a different group in 2011 asked people to identify whether certain scientists (highly trained and at well-respected institutions) were credible experts on global warming, disposal of nuclear waste, and gun control. Subjects largely favored the scientists whose conclusions matched their own values; the facts were irrelevant.

This behavior is called "selective perception"--the way that otherwise rational people distort facts by putting them through a personal lens of social influence and wind up with a worldview that often alters reality. Selective perception affects all our beliefs, and it's a major stumbling block for science communication.

What divides us, it turns out, isn't the issues. It's the social and political contexts that color how we see the issues. Take nuclear power, for example. In the U.S., we argue about it; in France, the public couldn't care less. (The U.S.'s power is about 20 percent nuclear; France's is 78 percent.) Look at nearly any science issue and nations hold different opinions. We fight about gun control, climate change, and HPV vaccination. In Europe, these controversies don't hold a candle to debates about GMO foods and mad cow disease. Scientific subjects become politically polarized because the public interprets even the most rigorously assembled facts based on the beliefs of their social groups, says Dan Kahan, a Yale professor of law and psychology who ran the 2011 science-expert study.

The problem is, our beliefs influence policy. Public attitudes change how politicians vote, the products companies make, and how science gets funded.

So what can we do? The science world has taken note. For example, the National Science Foundation recently emphasized grant-proposal rules that encourage scientists to share their research with the public. And several conferences on science communication have sprung up. It's not a bad start. As people hear more from scientists, scientists will be absorbed into the public's social lens--and maybe even gain public trust. Having scientists tweet is good, but the most influential public figures are the ones folks can relate to (à la Carl Sagan). We need to get more figures like him--fast. According to Kahan, synthetic biology is a prime candidate for the next controversy. Building man-made versions of DNA or engineering better humans can be risky, and the public will need to make decisions about it. To ensure that those decisions are clear-eyed, scientists need to stop communicating as, well, scientists and speak like the rest of us.

    


Building A Better Bomb Detector

Artificial Nose?

Istockphoto.com & Iñaki Antoñana Plaza/Getty Images

Dogs are the best bomb detectors we have. Can scientists do better?

It's Christmas season at the Quintard Mall, in Oxford, Alabama, and were it not a weekday morning, the tiled halls would be thronged with shoppers, and I'd probably feel much weirder walking past Victoria's Secret with TNT in my pants. The explosive is harmless in its current form-powdered and sealed inside a pair of four-ounce nylon pouches tucked into the back pockets of my jeans-but it's volatile enough to do its job, which is to attract the interest of a homeland defender in training by the name of Suge.

Suge is an adolescent black Labrador retriever in an orange DO NOT PET vest. He is currently a pupil at Auburn University's Canine Detection Research Institute and comes to the mall once a week to practice for his future job: protecting America from terrorists by sniffing the air with extreme prejudice.

Olfaction is a canine's primary sense. It is to him what vision is to a human, the chief input for data. For more than a year, the trainers at Auburn have honed that sense in Suge to detect something very explicit and menacing: molecules that indicate the presence of an explosive, such as the one I'm carrying.

The TNT powder has no discernible scent to me, but to Suge it has a very distinct chemical signature. He can detect that signature almost instantly, even in an environment crowded with thousands of other scents. Auburn has been turning out the world's most highly tuned detection dogs for nearly 15 years, but Suge is part of the school's newest and most elite program. He is a Vapor Wake dog, trained to operate in crowded public spaces, continuously assessing the invisible vapor trails human bodies leave in their wake.

Unlike traditional bomb-sniffing dogs, which are brought to a specific target-say, a car trunk or a suspicious package-the Vapor Wake dog is meant to foil a particularly nasty kind of bomb, one carried into a high traffic area by a human, perhaps even a suicidal one. In busy locations, searching individuals is logistically impossible, and fixating on specific suspects would be a waste of time. Instead, a Vapor Wake dog targets the ambient air.

As I approach the mall's central courtyard, where its two wings of chain stores intersect, Suge is pacing back and forth at the end of a lead, nose in the air. At first, I walk toward him and then swing wide to feign interest in a table covered with crystal curios. When Suge isn't looking, I walk past him at a distance of about 10 feet, making sure to hug the entrance of Bath & Body Works, conveniently the most odoriferous store in the entire mall. Within seconds, I hear the clattering of the dog's toenails on the hard tile floor behind me.

As Suge struggles at the end of his lead (once he's better trained, he'll alert his handler to threats in a less obvious manner), I reach into my jacket and pull out a well-chewed ball on a rope-his reward for a job well done-and toss it over my shoulder. Christmas shoppers giggle at the sight of a black Lab chasing a ball around a mall courtyard, oblivious that had I been an actual terrorist, he would have just saved their lives.

That Suge can detect a small amount of TNT at a distance of 10 feet in a crowded mall in front of a shop filled with scented soaps, lotions, and perfumes is an extraordinary demonstration of the canine's olfactory ability. But what if, as a terrorist, I'd spotted Suge from a distance and changed my path to avoid him? And what if I'd chosen to visit one of the thousands of malls, train stations, and subway platforms that don't have Vapor Wake dogs on patrol?

Dogs may be the most refined scent-detection devices humans have, a technology in development for 10,000 years or more, but they're hardly perfect. Graduates of Auburn's program can cost upwards of $30,000. They require hundreds of hours of training starting at birth. There are only so many trainers and a limited supply of purebred dogs with the right qualities for detection work. Auburn trains no more than a couple of hundred a year, meaning there will always be many fewer dogs than there are malls or military units. Also, dogs are sentient creatures. Like us, they get sleepy; they get scared; they die. Sometimes they make mistakes.

As the tragic bombing at the Boston Marathon made all too clear, explosives remain an ever-present danger, and law enforcement and military personnel need dogs-and their noses-to combat them. But it also made clear that security forces need something in addition to canines, something reliable, mass-producible, and easily positioned in a multitude of locations. In other words, they need an artificial nose.

* * *



In 1997, DARPA created a program to develop just such a device, targeted specifically to land mines. No group was more aware than the Pentagon of the pervasive and existential threat that explosives represent to troops in the field, and it was becoming increasingly apparent that the need for bomb detection extended beyond the battlefield. In 1988, a group of terrorists brought down Pan Am Flight 103 over Lockerbie, Scotland, killing 270 people. In 1993, Ramzi Yousef and Eyad Ismoil drove a Ryder truck full of explosives into the underground garage at the World Trade Center in New York, nearly bringing down one tower. And in 1995, Timothy McVeigh detonated another Ryder truck full of explosives in front of the Alfred P. Murrah Federal Building in Oklahoma City, killing 168. The "Dog's Nose Program," as it was called, was deemed a national security priority.

Over the course of three years, scientists in the program made the first genuine headway in developing a device that could "sniff" explosives in ambient air rather than test for them directly. In particular, an MIT chemist named Timothy Swager homed in on the idea of using fluorescent polymers that, when bound to molecules given off by TNT, would turn off, signaling the presence of the chemical. The idea eventually developed into a handheld device called Fido, which is still widely used today in the hunt for IEDs (many of which contain TNT). But that's where progress stalled.

Olfaction, in the most reductive sense, is chemical detection. In animals, molecules bind to receptors that trigger a signal that's sent to the brain for interpretation. In machines, scientists typically use mass spectrometry in lieu of receptors and neurons. Most scents, explosives included, are created from a specific combination of molecules. To reproduce a dog's nose, scientists need to detect minute quantities of those molecules and identify the threatening combinations. TNT was relatively easy. It has a high vapor pressure, meaning it releases abundant molecules into the air. That's why Fido works. Most other common explosives, notably RDX (the primary component of C-4) and PETN (in plastic explosives such as Semtex), have very low vapor pressures-parts per trillion at equilibrium and once they're loose in the air perhaps even parts per quadrillion.

"That was just beyond the capabilities of any instrumentation until very recently," says David Atkinson, a senior research scientist at the Pacific Northwest National Laboratory, in Richland, Washington. A gregarious, slightly bearish man with a thick goatee, Atkinson is the co-founder and "perpetual co-chair" of the annual Workshop on Trace Explosives Detection. In 1988, he was a PhD candidate at Washington State University when Pan Am Flight 103 went down. "That was the turning point," he says. "I've spent the last 20 years helping to keep explosives off airplanes." He might at last be on the verge of a solution.

When I visit him in mid-January, Atkinson beckons me into a cluttered lab with a view of the Columbia River. At certain times of the year, he says he can see eagles swooping in to poach salmon as they spawn. "We're going to show you the device we think can get rid of dogs," he says jokingly and points to an ungainly, photocopier-size machine with a long copper snout in a corner of the lab; wires run haphazardly from various parts.

Last fall, Atkinson and two colleagues did something tremendous: They proved, for the first time, that a machine could perform direct vapor detection of two common explosives-RDX and PETN-under ambient conditions. In other words, the machine "sniffed" the vapor as a dog would, from the air, and identified the explosive molecules without first heating or concentrating the sample, as currently deployed chemical-detection machines (for instance, the various trace-detection machines at airport security checkpoints) must. In one shot, Atkinson opened a door to the direct detection of the world's most nefarious explosives.

As Atkinson explains the details of his machine, senior scientist Robert Ewing, a trim man in black jeans and a speckled gray shirt that exactly matches his salt-and-pepper hair, prepares a demonstration. Ewing grabs a glass slide soiled with RDX, an explosive that even in equilibrium has a vapor pressure of just five parts per trillion. This particular sample, he says, is more than a year old and just sits out on the counter exposed; the point being that it's weak. Ewing raises this sample to the snout end of a copper pipe about an inch in diameter. That pipe delivers the air to an ionization source, which selectively pairs explosive compounds with charged particles, and then on to a commercial mass spectrometer about the size of a small copy machine. No piece of the machine is especially complicated; for the most part, Atkinson and Ewing built it with off-the-shelf parts.

Ewing allows the machine to sniff the RDX sample and then points to a computer monitor where a line graph that looks like an EKG shows what is being smelled. Within seconds, the graph spikes. Ewing repeats the experiment with C-4 and then again with Semtex. Each time, the machine senses the explosive.

A commercial version of Atkinson's machine could have enormous implications for public safety, but to get the technology from the lab to the field will require overcoming a few hurdles. As it stands, the machine recognizes only a handful of explosives (at least nine as of April), although both Ewing and Atkinson are confident that they can work out the chemistry to detect others if they get the funding. Also, Atkinson will need to shrink it to a practical size. The current smallest version of a high-performance mass spectrometer is about the size of a laser printer-too big for police or soldiers to carry in the field. Scientists have not yet found a way to shrink the device's vacuum pump. DARPA, Atkinson says, has funded a project to dramatically reduce the size of vacuum pumps, but it's unclear if the work can be applied to mass spectrometry.

If Atkinson can reduce the footprint of his machine, even marginally, and refine his design, he imagines plenty of very useful applications. For instance, a version affixed to the millimeter wave booths now common at American airports (the ones that require passengers to stand with their hands in the air-also invented at PNNL, by the way) could use a tube to sniff air and deliver it to a mass spectrometer. Soldiers could also mount one to a Humvee or an autonomous vehicle that could drive up and sniff suspicious piles of rubble in situations too perilous for a human or dog. If Atkinson could reach backpack size or smaller, he may even be able to get portable versions into the hands of those who need them most: the marines on patrol in Afghanistan, the Amtrak cops guarding America's rail stations, or the officers watching over a parade or road race.

Atkinson is not alone in his quest for a better nose. A research group at MIT is studying the use of carbon nanotubes lined with peptides extracted from bee venom that bind to certain explosive molecules. And at the French-German Research Institute in France, researcher Denis Spitzer is experimenting with a chemical detector made from micro-electromechanical machines (MEMs) and modeled on the antennae of a male silkworm moth, which are sensitive enough to detect a single molecule of female pheromone in the air.

Atkinson may have been first to demonstrate extremely sensitive chemical detection-and that research is all but guaranteed to strengthen terror defense-but he and other scientists still have a long way to go before they approach the sophistication of a dog nose. One challenge is to develop a sniffing mechanism. "With any electronic nose, you have to get the odorant into the detector," says Mark Fisher, a senior scientist at Flir Systems, the company that holds the patent for Fido, the IED detector. Every sniff a dog takes, it processes about half a liter of air, and a dog sniffs up to 10 times per second. Fido processes fewer than 100 milliliters per minute, and Atkinson's machine sniffs a maximum of 20 liters per minute.
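
Put in common units, the gap in sniffing throughput is stark. This is just the arithmetic from the figures above, treating the dog's 10 sniffs per second as an upper bound:

    dog_l_per_min = 0.5 * 10 * 60    # ~0.5 L per sniff, up to 10 sniffs per second
    atkinson_l_per_min = 20          # Atkinson's machine, at its maximum
    fido_l_per_min = 0.1             # Fido: fewer than 100 mL per minute

    print(f"Dog: ~{dog_l_per_min:.0f} L/min; Atkinson's machine: {atkinson_l_per_min} L/min; Fido: <{fido_l_per_min} L/min")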

Another much greater challenge, perhaps even insurmountable, is to master the mechanisms of smell itself.

* * *



Olfaction is the oldest of the sensory systems and also the least understood. It is complicated and ancient, sometimes called the primal sense because it dates back to the origin of life itself. The single-celled organisms that first floated in the primordial soup would have had a chemical detection system in order to locate food and avoid danger. In humans, it's the only sense with its own dedicated processing station in the brain-the olfactory bulb-and also the only one that doesn't transmit its data directly to the higher brain. Instead, the electrical impulses triggered when odorant molecules bind with olfactory receptors route first through the limbic system, home of emotion and memory. This is why smell is so likely to trigger nostalgia or, in the case of those suffering from PTSD, paralyzing fear.

All mammals share the same basic system, although there is great variance in sensitivity between species. Those that use smell as the primary survival sense, in particular rodents and dogs, are orders of magnitude better than humans at identifying scents. Architecture has a lot to do with that. Dogs are lower to the ground, where molecules tend to land and linger. They also sniff much more frequently and in a completely different way (by first exhaling to clear distracting scents from around a target and then inhaling), drawing more molecules to their much larger array of olfactory receptors. Good scent dogs have 10 times as many receptors as humans, and 35 percent of the canine brain is devoted to smell, compared with just 5 percent in humans.

Unlike hearing and vision, both of which have been fairly well understood since the 19th century, scientists first explained smell only 50 years ago. "In terms of the physiological mechanisms of how the system works, that really started only a few decades ago," says Richard Doty, director of the Smell and Taste Center at the University of Pennsylvania. "And the more people learn, the more complicated it gets."

Whereas Atkinson's vapor detector identifies a few specific chemicals using mass spectrometry, animal systems can identify thousands of scents that are, for whatever reason, important to their survival. When molecules find their way into a nose, they bind with olfactory receptors that dangle like upside-down flowers from a sheet of brain tissue known as the olfactory epithelium. Once a set of molecules links to particular receptors, an electrical signal is sent through axons into the olfactory bulb and then through the limbic system and into the cortex, where the brain assimilates that information and says, "Yum, delicious coffee is nearby."

As is the case with explosives, most smells are compounds of chemicals (only a very few are pure; for instance, vanilla is only vanillin), meaning that the system must pick up all those molecules together and recognize the particular combination as gasoline, say, and not diesel or kerosene. Doty explains the system as a kind of code, and he says, "The code for a particular odor is some combination of the proteins that get activated." To create a machine that parses odors as well as dogs, science has to unlock the chemical codes and program artificial receptors to alert for multiple odors as well as combinations.

In some ways, Atkinson's machine is the first step in this process. He's unlocked the codes for a few critical explosives and has built a device sensitive enough to detect them, simply by sniffing the air. But he has not had the benefit of many thousands of years of bioengineering. Canine olfaction, Doty says, is sophisticated in ways that humans can barely imagine. For instance, humans don't dream in smells, he says, but dogs might. "They may have the ability to conceptualize smells," he says, meaning that instead of visualizing an idea in their mind's eye, they might smell it.

Animals can also convey metadata with scent. When a dog smells a telephone pole, he's reading a bulletin board of information: which dogs have passed by, which ones are in heat, etc. Dogs can also sense pheromones in other species. The old adage is that they can smell fear, but scientists have proved that they can smell other things, like cancer or diabetes. Gary Beauchamp, who heads the Monell Chemical Senses Center in Philadelphia, says that a "mouse sniffing another mouse can obtain much more information about that mouse than you or I could by looking at someone."

If breaking chemical codes is simple spelling, deciphering this sort of metadata is grammar and syntax. And while dogs are fluent in this mysterious language, scientists are only now learning the ABC's.

* * *



There are few people who better appreciate the complexities of smell than Paul Waggoner, a behavioral scientist and the associate director of Auburn's Canine Research Detection Institute. He has been hacking the dog's nose for more than 20 years.

"By the time you leave, you won't look at a dog the same way again," he says, walking me down a hall where military intelligence trainees were once taught to administer polygraphs and out a door and past some pens where new puppies spend their days. The CRDI occupies part of a former Army base in the Appalachian foothills and breeds and trains between 100 and 200 dogs-mostly Labrador retrievers, but also Belgian Malinois, German shepherds, and German shorthaired pointers-a year for Amtrak, the Department of Homeland Security, police departments across the U.S., and the military. Training begins in the first weeks of life, and Waggoner points out that the floor of the puppy corrals is made from a shiny tile meant to mimic the slick surfaces they will encounter at malls, airports, and sporting arenas. Once weaned, the puppies go to prisons in Florida and Georgia, where they get socialized among prisoners in a loud, busy, and unpredictable environment. And then they come home to Waggoner.

What Waggoner has done over tens of thousands of hours of careful study is begin to quantify a dog's olfactory abilities. For instance, how small a sample dogs can detect (parts per trillion, at least); how many different types of scents they can detect (within a certain subset, explosives for instance, there seems to be no limit, and a new odor can be learned in hours); whether training a dog on multiple odors degrades its overall detection accuracy (typically, no); and how certain factors like temperature and fatigue affect performance.

The idea that the dog is a static technology just waiting to be obviated really bothers Waggoner, because he feels like he's innovating every bit as much as Atkinson and the other lab scientists. "We're still learning how to select, breed, and get a better dog to start with-then how to better train it and, perhaps most importantly, how to train the people who operate those dogs."

Waggoner even taught his dogs to climb into an MRI machine and endure the noise and tedium of a scan. If he can identify exactly which neurons are firing in the presence of specific chemicals and develop a system to convey that information to trainers, he says it could go a long way toward eliminating false alarms. And if he could get even more specific-whether, say, RDX fires different cells than PETN-that information might inform more targeted responses from bomb squads.

After a full day of watching trainers demonstrate the multitudinous abilities of CRDI's dogs, Waggoner leads me back to his sparsely furnished office and clicks a video file on his computer. It was from a lecture he'd given at an explosives conference, and it featured Major, a yellow Lab wearing what looked like a shrunken version of the Google Street View car array on its back. Waggoner calls this experiment Autonomous Canine Navigation.

Working with preloaded maps, a computer delivered specific directions to the dog. By transmitting beeps that indicated left, right, and back, it helped Major navigate an abandoned "town" used for urban warfare training. From a laptop, Waggoner could monitor the dog's position using both cameras and a GPS dot, while tracking its sniff rate. When the dog signaled the presence of explosives, the laptop flashed an alert, and a pin was dropped on the map.

It's not hard to imagine this being very useful in urban battlefield situations or in the case of a large area and a fast-ticking clock-say, an anonymous threat of a bomb inside an office building set to detonate in 30 minutes. Take away the human and the leash, and a dog can sweep entire floors at a near sprint. "To be as versatile as a dog, to have all capabilities in one device, might not be possible," Waggoner says.

It's important to recognize that both sides-the dog people and the scientists working to emulate the canine nose-have a common goal: to stop bombs from blowing up. And the most effective result of this technology race, Waggoner thinks, is a complementary relationship between dog and machine. It's impractical, for instance, to expect even a team of Vapor Wake dogs to protect Grand Central Terminal, but railroad police could perhaps one day install a version of Atkinson's sniffer at that station's different entrances. If one alerts, they could call in the dogs.

There's a reason Flir Systems, the maker of Fido, has a dog research group, and it's not just for comparative study, says the man who runs it, Kip Schultz. "I think where the industry is headed, if it has forethought, is a combination," he told me. "There are some things a dog does very well. And some things a machine does very well. You can use one's strengths against the other's weaknesses and come out with a far better solution."

Despite working for a company that is focused mostly on sensor innovation, Schultz agrees with Waggoner that we should be simultaneously pushing the dog as a technology. "No one makes the research investment to try to get an Apple approach to the dog," he says. "What could he do for us 10 or 15 years from now that we haven't thought of yet?"

On the other hand, dogs aren't always the right choice; they're probably a bad solution for screening airline cargo, for example. It's a critical task, but it's tedious work sniffing thousands of bags per day as they roll by on a conveyor belt. There, a sniffer mounted over the belt makes far more sense. It never gets bored.

"The perception that sensors will put dogs out of business-I'm telling you that's not going to happen," Schultz told me, at the end of a long conference call. Mark Fisher, who was also on the line, laughed. "Dogs aren't going to put sensors out of business either."

Josh Dean lives in Brooklyn and is the author of Show Dog: The Charmed Life and Trying Times of a Near-Perfect Purebred.

    



The Energy Fix: When Will The U.S. Reach Energy Independence? [Infographic]

What government forecasts suggest about U.S. energy independence

Since long before the rise of big data, the U.S. Energy Information Administration has tracked the country's energy consumption and production [thick lines]. The size of the gap between the two reflects how close the country is to energy independence. The EIA also projects energy production and usage into the future to help guide industry regulations and policy decisions. A computer program--which took the EIA nearly two decades to build and requires 35 analysts to run--generates its predictions [thin lines] based on current energy laws and regulations. While it's impossible to predict influential events such as wars and recessions, the general trend suggests that since 2005--when the energy deficit [red] peaked--the U.S. has been making more of its own energy and using less overall. "We as a society are valuing energy independence more," said Steven Wade, an economist for the EIA.

    


How The Turtle Got Its Shell

The Indian Star Tortoise

Wikimedia Commons

Pictured: a turtle with a really cool shell.

The turtle shell isn't like any other protective element of any living animal: it's not an exoskeleton, like some invertebrates have, nor is it made of ossified scales, like the armor of armadillos, pangolins, and some snakes and other reptiles. It's not made of skin. It can't be removed--to do so would kill the turtle, and all that's underneath is internal organs. So how did this very common animal end up so unique?

New research from Tyler Lyson at Yale University furthers an existing theory: that turtles stem from a roughly 260-million-year-old reptile called Eunotosaurus. Eunotosaurus looks sort of like a cross between a turtle and a lizard, or like a lizard that's somehow swallowed a cannonball.

The turtle shell is actually a peculiar evolution of a turtle's bone structure. Its vertebrae, pelvis, ribcage--it has no muscles between its ribs, which makes this easier--and other bones fuse together to form a sort of reptilian exoskeleton. The scale-like pieces which make the turtle shell look like a soccer ball are called scutes, and they have individual names, often based on location, after what they'd be called in a less bizarre animal (anal scute, pectoral scute, that kind of thing).

Turtles are very old, appearing in the fossil record about 210 million years ago. But the shell was already fully formed in those fossils. Where the hell did the turtle come from?

Lyson analyzed more than 45 Eunotosaurus fossils, as Eunotosaurus has a sort of proto-shell in which its ribs are touching, like the turtle and unlike any other reptile. Then he put together an animation, which shows the organic and natural way that Eunotosaurus could have led to the turtle.

You can see the bones of the more typically reptilian early relatives broadening, joining together, and moving to the top and bottom of the animal, forming a protective barrier. And, interestingly, this process is mirrored in the development of turtle embryos. "The first thing we see in a developing turtle embryo is the broadening of its ribs, followed by the broadening of its vertebrae, and finally by the acquisition of the osteoderms along the perimeter of the shell," Lyson says.

The paper appears in the latest issue of Current Biology.

    


USDA Aims To Grow White Rice With All The Nutrients Of Brown Rice

Checking the Rice

Researchers examine rice plants. Geneticist Shannon Pinson is in the foreground.

Photo by Stephen Ausmus

The rice would help those who suffer from mineral deficiencies in developing countries, but the agency hopes U.S. shoppers will bite, too.

Malaysia grows four types of rice with more molybdenum--a mineral that helps rice plants cope with acidic soil--than any other rice on Earth. In other parts of the world, different varieties are naturally richer in calcium, potassium, iron, and other minerals people need.

At the U.S. Department of Agriculture, scientists have tested 1,643 types of rice from around the world to find the ones that are the most nutritious. "It's like, where in the world are the genes we're looking for?" Shannon Pinson, a USDA geneticist, tells Popular Science.

The USDA hopes such knowledge will eventually help breeders create rice varieties that ease mineral deficiencies in developing countries where rice is a staple. Grown-in fortification might find a market among U.S. shoppers, too: it would mean white rice could carry the same nutrients that now appear only in brown rice, or in white rice enriched after the fact with minerals added to its surface.

Interestingly, the USDA's plan is to help breeders grow rice with the minerals they want, not to genetically engineer it. Once scientists find the genes that are responsible for mineral levels--the next step in their research--they'll hand that information over to plant breeders. "I'm right next door to the breeder at the University of Arkansas," Pinson says. "I've got stuff in her fields and she's got stuff in my fields."

Breeders create new varieties of rice the old-fashioned way, by reproducing only the plants with the genes they want.

Pinson's lab's avoidance of genetically modified rice isn't about whether GMO foods are good or bad, she says. She simply doesn't have the facilities to genetically modify rice. "In fact, I don't think GMOs are a problem," she says. "My personal opinion is, I would eat them." But she doesn't study them.

Instead, Pinson's line of research is at once old and new. Humans have bred and selected the plants they want for as long as they've farmed, and even identifying and targeting specific genes is a well-known technique that researchers have honed since the 1980s. Most of the rice--as well as the corn and wheat--that Americans buy in grocery stores has benefited from these non-engineering genetic techniques, Pinson says. That nice, dry, fluffy texture American rice has? That came from work by researchers like Pinson.

Making mineral-enriched grains is more difficult, however, because it involves many genes--and many interacting minerals. You don't want to increase the calcium in rice, for example, only to decrease its magnesium at the same time. So cooked-rice texture, which is controlled by a single gene, came first. Then came resistance to a fungus called blast. Mineral content is a more distant frontier; Pinson guesses people won't see high-calcium or high-iron rice in supermarkets for another 20 or 30 years.

The idea of nutritionally enriching rice to fight malnutrition has a notorious predecessor. In 1999, researchers created the first variety of golden rice, genetically engineered to produce beta-carotene, a precursor of vitamin A. It was supposed to help kids in developing countries who don't get enough of the nutrient, but it met with fierce opposition from organizations ranging from Greenpeace to local groups in the countries where the rice was supposed to go. Research on golden rice is still ongoing, but with those setbacks it has progressed much more slowly than originally promised.

Unlike mineral-enriched rice, golden rice can't be produced through breeding alone, because no variety of rice makes the nutrient on its own. It has to be engineered in.

Pinson doesn't have a lot to say about golden rice, besides that she would eat it. She says her research group is sensitive to two major market forces: One, rice appears in many American baby foods and even those who don't normally buy organic often prefer organic baby food. GMO rice can't be organic, but bred rice, even if it's bred using the genetic knowledge Pinson develops, may still be grown organically. Two, roughly half of the rice the U.S. grows is exported, and many countries don't wish to import GMO foods.



First Images Of How A Molecule's Structure Changes In A Reaction

Before and After the Reaction

UC Berkeley

Visualizing chemistry is awesome!

Researchers have for the first time captured atomic-scale images of molecules before and after a chemical reaction--a breakthrough that will help researchers and students better visualize chemistry and could eventually lead to improved electronics.

Traditionally, scientists have to infer how a molecule's structure changes in a reaction. "In chemistry you throw stuff into a flask and something else comes out, but you typically only get very indirect information about what you have," lead researcher Felix Fischer, a UC Berkeley assistant professor of chemistry, says in a press release. "You have to deduce that by taking nuclear magnetic resonance, infrared or ultraviolet spectra. It is more like a puzzle, putting all the information together and then nailing down what the structure likely is. But it is just a shadow. Here we actually have a technique at hand where we can look at it and say this is exactly the molecule. It's like taking a snapshot of it."

While trying to build new graphene nanostructures, Fischer and his colleagues were able to visualize the exact structure of a molecule--right down to the chemical bonds between atoms--and how that structure changes during a reaction.

The key? A technique called noncontact atomic force microscopy. The microscope's ultra-precise tip, capped with a single carbon monoxide molecule, traces the bonds between atoms in the molecule, creating an image almost like a leaf rubbing.

Here's a better look:

The implications are both simple ("visualizing chemistry is awesome!") and elaborate: precisely positioned nanostructures of graphene allow for the construction of absurdly small machines. But to place them precisely, they first have to be visualized. "The atomic force microscope gives us new information about the chemical bond, which is incredibly useful for understanding how different molecular structures connect up and how you can convert from one shape into another shape," says Michael Crommie, a UC Berkeley professor of physics. "This should help us to create new engineered nanostructures, such as bonded networks of atoms that have a particular shape and structure for use in electronic devices. This points the way forward."

The images were published in Science Express last week.



Check Out This Giant Inflatable Hangar For A Solar-Powered Plane

Solar Impulse Inflatable Hangar

Solar Impulse

The sun-powered plane Solar Impulse is flying out of Dallas, but instead of touching down at a reserved hangar, it will shelter in an inflatable structure for part three of its trans-America flight.

Solar Impulse, the solar-powered plane currently flying across the U.S., left Texas today on the third leg of its cross-country voyage. But it'll have to call an audible when it touches down in St. Louis after midnight tonight: the hangar reserved for the aircraft was battered by storms in the area, so the plane will shelter in a special inflatable hangar instead.

Solar Impulse has been motoring along since it took off from San Francisco in early May, making stops in Phoenix and Dallas. Before takeoff, the team calculated that the cross-country trip to New York would take two months. If the plane hadn't left Dallas this morning, it would have risked being grounded there for a whole week by Midwest weather--enough of a delay to blow the entire two-month timetable.

So the inflatable structure--created for an around-the-globe flight scheduled for 2015 but never before used in real-world conditions--will be set up by the Solar Impulse ground team before the aircraft lands, 21 hours after today's takeoff. The waterproof, fireproof hangar can withstand winds of up to 62 mph. It's definitely not the first inflatable airplane hangar, and with four hours of build time required, it's probably not the easiest to set up, either. But it does have features Solar Impulse needs: it's translucent, so the plane can soak up some rays and recharge while docked inside, and it's wide enough to accommodate the plane's 208-foot wingspan.

After touchdown, Solar Impulse will switch pilots (two of them are alternating legs) and make its way to Washington, D.C., and then on to New York, where it'll complete its trans-America flight.



These Artificially Intelligent Legos Look Awesome

Motorized Legos fight under human or computer command.

For better or worse, Legos haven't changed all that much since the company's founding back in 1932. But a partnership with Sony might change the classic bricks into semi-autonomous machines.

IDG News Service took a tour of Sony Computer Science Laboratories in Tokyo, and found a series of wired Legos, complete with cameras, motors, and a dash of artificial intelligence, all stuffed inside special bricks. As part of a demonstration, a motorized Lego platform controlled by a computer squared off against a platform controlled by a human with a PlayStation controller. The computer's platform used a camera to locate and chase down the human's platform. Using the same technology, parts of a brick environment were programmed to explode when they detected motion, creating a Lego minefield. That's as close to a videogame like LittleBigPlanet Karting as it is to Legos.

This project's still in the experimental phase, so it'll likely be quite a while before anyone can pick up a kit from the store. In the meantime, let's put that technology into this full-size Lego X-wing and get real-life Star Wars.

[Network World]



Where Does Foreign Aid Go? [Infographic]

Foreign Aid to Iraq in 2005

Hannah Davis

The most fascinating part of this infographic is what it obscures.

This interactive infographic by NYU graduate student Hannah Davis shows the global distribution of all foreign-aid spending (so not just American) for every year from 1960 to 2010. Play around with it, and it's like watching a Ouija board of countries battling for influence.

Sometimes the directional pull of foreign aid is obvious, as in the image above. Iraq dominates the chart for 2005--at the height of the Iraq War--soaking up more than a fifth of foreign aid.

Other times, the noise itself is the picture. Aid in 1993, after the collapse of the USSR and before the major humanitarian efforts of the late 1990s, is widely dispersed, with only China commanding more than 5 percent of the global total. Rwanda, highlighted here, was just about to undergo the darkest chapter in its history. Aid to Rwanda almost doubled in 1994, no doubt in response to the genocide that took place.

The most fascinating part of this infographic is what it obscures. Between 1960 and 2010, global spending on foreign aid increased 30-fold (granted, that's without adjusting for inflation). The pie chart can't really show that--it's great at contrasting relative amounts of aid, but over that period the whole pie grew, so slices that show only each country's share of the total aren't the complete picture.
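A toy calculation makes the point: a shrinking slice can still mean more dollars, because the share is a ratio and the denominator grew roughly 30-fold over the period. The numbers below are invented for illustration, not taken from the infographic's data.

# Made-up totals in millions of dollars -- illustrative only.
global_aid    = {1960: 5_000, 2010: 150_000}   # the whole pie grows roughly 30-fold
country_share = {1960: 0.04, 2010: 0.02}       # one country's slice shrinks from 4% to 2%

for year in sorted(global_aid):
    dollars = global_aid[year] * country_share[year]
    print(f"{year}: a {country_share[year]:.0%} slice is worth ${dollars:,.0f}M")
# A smaller slice of a much bigger pie: $200M in 1960 vs. $3,000M in 2010.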

You can find the infographic here.




Watch A Terrifying, Beautiful Electrical Storm From A Plane Window

Terrifying for anyone on the plane, but an awesome video for us back on solid ground.

If you've got even the slightest fear of flying, this might not be what you want to see when you look out the window. We don't know too much about the circumstances, but apparently passengers on board a plane from Miami to Dallas/Fort Worth looked out their windows at some point and were treated to this view of an electrical storm, which went on for about 30 scary, lovely minutes.

[The Weather Channel]



Make Your Own Small-Batch, Artisanal High Fructose Corn Syrup

DIY High-Fructose Corn Syrup

via Bon Appetit

Delicious DIY sugarglop.

Think of corn on a hot summer's day. Sweet, delicious, all-American corn. It's stuck in your teeth and you barely care, as your practiced jaw scrapes the pure taste of the season from the cob.

Now think of high-fructose corn syrup. You probably don't have an idyllic childhood memory to go along with the sugarglop that's killing both the American people and the American tradition of agriculture. Just a guess!

HFCS, as it's called, is a relentlessly common sweetener in everything from soda to bread. It's used because, in bulk, it's incredibly cheap, thanks to agricultural subsidies from the U.S. government that encourage farmers to grow the high-yield, flavorless-when-unprocessed variety of corn used for HFCS. One of the many odd things about high-fructose corn syrup is that you can't really buy the pure stuff in a store. And what's actually in it? That's what artist and designer Maya Weinstein wondered--except she actually secured the ingredients for her thesis project at Parsons.

The DIY HFCS Kit includes all of the delicious materials that go into our country's finest, weirdest sweetener: glucose isomerase, sulfuric acid, alpha amylase, and more (the enzymes break corn starch down into glucose and then convert some of that glucose into fructose). It's not that hard to make: basically, just combine everything in the kit besides the glucose isomerase, strain through cheesecloth, heat, add the glucose isomerase, boil, and cool.

Weinstein went over to the Bon Appetit offices to demonstrate her kit, and, interestingly, it was a hit with the editors there. Before it's filtered to remove its yellow color and much of its flavor, it "tastes like corn candy," according to the editor of Bon Appetit's site. The filtering removes any lingering flavors that might not go with whatever you want to sweeten, and in mass production the syrup can be purified to as much as 90 percent fructose--nearly pure sweetness, basically.

The project isn't really for mass consumption; the kit itself costs $70-80 in raw materials to yield just a small jar of corn syrup. Corn syrup is used by so many food producers because it's cheap in bulk, and because the government gives lots of rebates and tax incentives to use the stuff--in small quantities, it's cheaper to go with honey or plain sugar. But it's a pretty interesting experiment to see how something we all eat all the time is actually made. Check out the project here.

[via Bon Appetit]



Astronomers Find The Lightest Exoplanet Ever Caught On Camera

Exoplanet HD 95086 b, next to its parent star

The star itself was removed from the picture during processing to enhance the view of the faint exoplanet, which appears at the lower left.

ESO's Very Large Telescope

The exoplanet's predicted mass is only four to five times that of Jupiter.

The image above contains what scientists believe is the lowest-mass exoplanet ever to be caught on camera. Called HD 95086 b, the newly discovered planet orbits a young star about 300 light-years from Earth.

Based on the exoplanet's brightness, astronomers predict it has a mass just four to five times that of Jupiter. The planet's host star, which is slightly bigger than our sun and surrounded by a disc of debris, is fairly youthful--only 10 million to 17 million years old (the sun is about 4.6 billion years old).

"The brightness of the star gives HD 95086 b an estimated surface temperature of about 700 degrees Celsius," says Gaël Chauvin, a researcher at the Institut de Planetologie et d'Astrophysique de Grenoble in France. "This is cool enough for water vapor and possibly methane to exist in its atmosphere."

To capture an image of such a faint, distant object, astronomers used an adaptive optics instrument mounted on the European Southern Observatory's Very Large Telescope. The instrument allowed the scientists to boost the contrast between the exoplanet and its much brighter star, which had to be removed from the image to make the exoplanet more visible.

According to the researchers, HD 95086 b may have formed from the debris disc around the parent star. The planet is 56 times farther away from its star than Earth is from the sun.
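For a sense of scale, converting that separation into kilometers is a one-line calculation; the figure below is my own back-of-the-envelope arithmetic, not a number from the paper (for comparison, Pluto orbits at roughly 40 times the Earth-sun distance).

AU_IN_KM = 149_597_871        # kilometers in one astronomical unit
separation_au = 56            # HD 95086 b's distance from its star, in Earth-sun units
print(f"{separation_au * AU_IN_KM / 1e9:.1f} billion km")   # about 8.4 billion km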

Read the full paper here.



FYI: Do We Really Get Cold Feet When We Have The Jitters?

Julia Roberts as Maggie Carpenter, the queen of cold feet, in Runaway Bride

YouTube

Turns out the phrase "cold feet" actually has some, erm, scientific footing.

We all remember Julia Roberts in Runaway Bride and Matthew McConaughey in The Wedding Planner. Their stories may vary slightly, but they have one thing in common (well, besides the standard romantic comedy clichés): both characters had cold feet before their wedding.

We've all heard the term "cold feet" before. Maybe we've even experienced it ourselves. It's when we decide--usually at the very last minute--that we cannot go through with a major life change. There's stress, panic, and often anxiety. But do our feet literally get cooler in these situations?

Scientists say yes.

Our bodies regulate our reactions to stress partly by shifting body temperature, and that regulation plays out at the level of individual cells. Within each cell of our body (whether that cell is in our feet or in our brain) is a protein called Rap1A. When the brain signals a receptor on the cell, that protein activates and shifts from one area of the cell to another.

In June 2012, a team of scientists from the Research Institute at Nationwide Children's Hospital identified this interaction between the protein molecules and the receptors as the main biological cause of icy extremities.

"When we exposed the cells with chemicals that activated the Rap1A with a receptor, we found a shift in the cell's nucleus and a change in the cell's skeleton," says Dr. Maqsood Chotani, a principal investigator at the hospital. "We identified this receptor as the alpha-2C receptor and have found that it responds at times of stress to conserve body heat.

"So when people are stressed, their brain will release stress hormones to specific cells in the body, which have the potential to activate the a2c receptor," Chotani says.

When the brain signals these receptors and the proteins shift, the structure of the cells changes. The body then reacts to the perceived threat by redirecting blood from the extremities, like the hands and feet, to vital organs like the brain, lungs, and heart.

"The brain tells the sympathetic nerves to release a chemical known as norepinephrine. [This then] tells the adrenal to release a related chemical known as epinephrine," says Dr. Martin Michel, a professor of medicine and scientific affairs at the Johannes Gutenberg University in Mainz. He adds that these two processes then activate the a-2C receptor.

"The body reaction to stress aims to maximize our chances to survive in the face of such threats," Michel says. "Our heart beats faster and our circulating blood is redistributed to those parts of the body which acutely need it most such as the heart and skeletal muscle. This is at the expense of other body parts such as gut and skin, which are less critical to the acute stress reactions of flight, fight and fright." That redistribution of blood away from the hands and feet lowers their temperature and leaves the skin cool and clammy.

So those cold feet that you feel before an impending decision are just your body's natural way of protecting you from possible harm. The common pre-wedding jitter expression actually has some scientific footing.

This story was produced in partnership with Northwestern University's Medill School of Journalism. For more FYIs, go here.



Did Neil Armstrong's Ohio Accent Obscure The 'A' In His Famous Quote?

Neil Armstrong During Apollo 11

NASA/via Wikimedia

Researchers say the Midwestern astronaut might really have said "one small step for a man."

Even after all these years, there's some debate over what, exactly, Neil Armstrong said when he stepped onto the moon. Armstrong said on the record that the quote was the grammatically correct "One small step for A man, one giant leap for mankind." But back on Earth, most people thought the "a" was omitted. Maybe Armstrong flubbed the quote, or maybe it didn't come through clearly on Earth. Or maybe, as a team of speech scientists is now suggesting, Armstrong's accent was the problem.

The researchers, from Ohio State University and Michigan State University, say the "a" could've been short and blended into the earlier part of the quote: something like, "One small step 'frrr(uh)' man." That, the team says, would be consistent with the accent Armstrong picked up growing up in central Ohio. Combine that accent with the poor-quality audio beamed from the moon back to Earth, and you've got a recipe for a whole planet mishearing history.

To test that hypothesis, the researchers dug through an archive of conversations from 40 people raised in Columbus, Ohio. (Armstrong, from nearby Wapakoneta, had a similar accent.) The speakers said "for a" 191 times, and the researchers measured how long it took them to say "for a" versus just "for." Turns out, it doesn't take Ohioans much longer to say "for a" than "for," which indicates some blending of the words. Armstrong's "frrr(uh)" clocked in at 0.127 seconds, putting it squarely within that blended range.
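The logic of the comparison is easy to sketch: if the ranges of durations for "for" and "for a" overlap, a 0.127-second token can't be confidently assigned to either one. The durations below are invented for illustration; they are not the study's measurements.

# Invented duration samples in seconds -- the shape of the argument, not the study's data.
for_durations   = [0.10, 0.12, 0.13, 0.15, 0.16]   # speakers saying "for"
for_a_durations = [0.11, 0.13, 0.14, 0.17, 0.18]   # speakers saying "for a"

armstrong = 0.127
fits_for   = min(for_durations) <= armstrong <= max(for_durations)
fits_for_a = min(for_a_durations) <= armstrong <= max(for_a_durations)
print(f"Consistent with 'for': {fits_for}; consistent with 'for a': {fits_for_a}")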

Does that mean Armstrong definitely added that "a" in his famous quote? Not necessarily, but it does bolster the idea that he did, and that one of the best-known quotes in American history is technically correct.

[SPACE.com]


