
    Electrocuting animals, stealing credit, self-aggrandizing, and more


    GoDoLearn

    Unfortunately, scientific genius doesn't necessarily come with superhuman social skills. The great minds on this list may have made significant contributions to their fields, but even hundreds of years later, we're still shaking our heads at their petty, ruthless, dishonest, and downright cruel behavior.

    1. Thomas Edison, The Animal Torturer

    In 1884, Nikola Tesla moved to New York City to meet Edison, who was famous for his low-voltage, direct-current (DC) electricity. Tesla believed that higher-voltage alternating-current (AC) electricity was superior and suggested building an AC-powered motor, but Edison claimed AC was too dangerous. Instead, Edison promised the recent immigrant $50,000 (over $1 million today) if he could improve Edison's DC generators, or "dynamos."

    After toiling for several months and making significant advances, Tesla returned for his reward, only for Edison to say, "When you become a full-fledged American, you will appreciate an American joke." Tesla quit, but the bullying didn't stop there.

    George Westinghouse had purchased Tesla's patents and became the pioneering force behind AC power and its widespread adoption. Edison, ideologically and financially invested in his own DC power, launched a publicity campaign against AC. The campaign was ruthless; to prove that high-voltage AC was too dangerous for public use, he and his cohorts began publicly electrocuting animals: stray dogs and cats, cattle and horses, and even, notoriously, "Topsy" the elephant.

    The story gets worse. When Edison was asked whether electrocution was a humane method of execution, he claimed that with Westinghouse's AC power it would indeed be humane and reliable. Westinghouse of course tried to prevent such an association, but Harold Brown, one of Edison's employees, was hired by the state of New York to build the first electric chair. Naturally, he used AC power.

    The execution, the first ever use of the electric chair, took place on August 6, 1890. AC power proved neither reliable nor humane. The first charge, 17 seconds long, failed to kill the condemned man, a convicted axe murderer; after waiting for the generator to recharge and raising the voltage, the next charge at last brought an end to the horrible, eight-minute ordeal. A disgusted Westinghouse reportedly said, "They would have done better with an axe."

    Despite all that, AC won the battle of the currents. Its higher voltage meant it could travel much greater distances, which made widespread distribution easier and cheaper, and transformers could step it down to safer voltages for household use. Edison's extensive and cruel efforts to defeat his opponents and prove his own system's superiority were in vain, and near the end of his life he finally admitted as much.

    If you need a laugh after all that morbidity, check out the "Drunk History" version of this story.

    2. Richard Owen, The Egomaniac

    Nineteenth-century naturalist Richard Owen is known for his many contributions to science (he was the accomplished paleontologist who coined the term "dinosaur") but also for being a jerk to his peers. He has been described, among many other scathing summations, as "vain, arrogant, envious, and vindictive": an egomaniac.

    One example is his long rivalry with fellow paleontologist Gideon Mantell. Their initially cooperative relationship went awry; the two competed tirelessly in print, each trying to unseat the other as the preeminent dinosaur discoverer. According to some accounts, Owen relied heavily on Mantell's work throughout his career but, out of excessive pride, was never willing to admit it.

    In 1844, Owen was awarded the Royal Medal of the Royal Society for a paper that Mantell felt was full of errors (and which, it turned out, totally was). By 1848, Mantell had published a paper to illuminate those errors, and Owen retaliated with an indignant verbal assault. The next year, Mantell himself was nominated for a Royal Medal, and Owen tried in vain to dissuade the council from granting it to him. Mantell described Owen as "overpaid, over-praised, and cursed with a jealous monopolising spirit!"

    Yet Mantell was not the only target of Owen's contemptuous treatment; other victims included none other than Charles Darwin and his "bulldog," Thomas Henry Huxley. Owen attempted to preempt Darwin's Origin of Species with his own article arguing that humans were entirely unrelated to apes; then, after Darwin released the book, Owen published a lengthy, harsh, anonymous refutation. Huxley, upon learning of Owen's ways, expressed his desire to "nail... [Owen]... that mendacious humbug... like a kite to the barn door."

    Many other contemporaries were treated similarly, especially those whose groundbreaking findings threatened his standing. Later in his life, Owen's reputation finally caught up with his oversized ego, and it continues to do so today.

    3. William Shockley, The Fraud

    Shockley is the guy who brought silicon to Silicon Valley. He was a physicist, an inventor, a Nobel laureate, and...reviled.

    After leaving MIT in 1936 with a PhD, Shockley joined Bell Laboratories, then the research arm of AT&T. When World War II broke out, he joined the war effort as a civilian scientist, proved himself quite useful to both the Navy and the Army Air Forces, and after the war was awarded the Medal for Merit. Following the war, Bell Labs put Shockley in charge of a team researching semiconductors, which it believed could replace inefficient vacuum tubes for controlling electricity.

    Shockley tried in vain to build such a device himself. After months of trial and error, two of his co-workers, John Bardeen and Walter Brattain, succeeded, constructing the first working transistor in 1947. Shockley erupted when he learned that his name wasn't on the discovery's patent, and eventually Bell Labs caved and added his name. The photo above, showing the trio and the transistor, "marked the first and last time William Shockley ever laid hands" on the device, and for that reason Brattain "always detested" it, according to the Los Angeles Times. For years afterward, Shockley continued to be credited, sometimes even solely, with the discovery.

    Even that undeserved fame wasn't enough for him. In 1956 he started his own semiconductor company in Palo Alto, California, and charmed an impressive team of scientists into joining. But soon his tone changed: he would publicly fire people, or demote PhDs to lowly production jobs, the Los Angeles Times reports. In November of that year, Brattain, Bardeen, and Shockley were awarded the Nobel Prize in Physics. Within a year, eight members of his struggling company's research group resigned to found Fairchild Semiconductor; two of them, Robert Noyce and Gordon Moore, would later found Intel.

    The later years of Shockley's life brought out the worst in him. In 1963, Stanford hired him as a professor of engineering. Soon thereafter, he began a two-decade-long campaign touting eugenics and the supposed genetic inferiority of certain races, snippets of which further paint an ugly, self-aggrandizing picture of the man. By the time he died in 1989, he had lost both his friends and his fame.

    4. Gottfried Wilhelm Leibniz and 5. Isaac Newton, The Frenemies

    (It might not be fair to call either of these two a bully, but their feud was remarkable and, at times, ugly.)

    Today, consensus seems to credit both Newton and Leibniz with the invention, or discovery, of calculus.

    The backstory is much more convoluted, though. Newton, four years older than Leibniz, had been developing his own calculus for longer, and he had entrusted a few of his preliminary papers to John Collins (a government clerk in London and, "in modern parlance, a scientific groupie," as Brian E. Blank tells PopularScience.com).

    By the 1670s, Newton and Leibniz were both working on their own versions of calculus. Collins and Henry Oldenburg (the secretary of the Royal Society) encouraged Newton to write to Leibniz. Leibniz responded "somewhat cagily," Blank says. As Newton composed his reply (remember: correspondence used to take time), Leibniz, intrigued, pursued his competitor's work further while in London, tracking down and studying the trove of Newton's papers in Collins' possession.

    Historians interpret Newton's reply to Leibniz in various lights: some say that Newton was friendly and encouraging, sharing his progress and helping Leibniz; others claim he was paranoid and guarded. Yet it was "obviously circumspect to all," explains Blank, and "effectively ended the correspondence."

    Yet when Leibniz published his famed paper in 1684, he failed to note Newton's contributions at all. Newton, busy with other work and largely uninterested in publishing, did not initially react. But in 1699 their feud began in earnest: Nicolas Fatio de Duillier, a mathematician close to Newton, publicly declared that "Newton was the first and by many years the most senior inventor of the calculus… As to whether Leibniz, its second inventor, borrowed anything from him, I prefer to let those judge who have seen Newton's letters."

    Five years later, in 1704, Newton published Opticks along with De quadratura. Leibniz anonymously reviewed the work the following year, and his message was condescending and deceitful: he wrote as if he were praising Newton for discovering an alternative way to do what he had already done. But: "From his correspondence with Newton, Leibniz knew that Newton was first by about 10 years," Blank explains.

    Five more years passed, until a 1710 paper by the astronomer John Keill championed Newton as the true discoverer: after praising Newton's brilliance, Keill claimed that "the same arithmetic… was afterwards published by Mr. Leibniz in the Acta Eruditorum having changed the name and the symbolism." Leibniz appealed to the Royal Society, whose president was none other than Newton, calling for a retraction.

    Newton then learned of Leibniz's condescending, anonymous review of Opticks, and his indignation flared. He wrote the Society's report on the dispute himself and ensured its international circulation; naturally, it hailed him as the "first inventor." Not long after, he anonymously published a follow-up to the original report, "Account of the Book entituled Commercium Epistolicum," had it translated into Latin and French, and solicited favorable reviews. Newton wanted the world to know that he was the true inventor, and wanted history to remember it. He continued to pursue that goal for years after Leibniz's death in 1716.


    Sunrise over the Aleutians

    Photo taken by the Expedition 36 crew on the International Space Station

    Plus, some cool clouds that form only high above the Earth over polar regions

    Here's what the land of the midnight sun actually looked like at 12:15 am local time, August 4. Astronauts in the International Space Station snapped this photo as they crossed over the Aleutian Islands. If the foreground of the photo weren't dark, you would see western Alaska in it.

    Here, the sun is just about to rise over the Aleutians. Above the sun are noctilucent clouds, also known as polar mesospheric clouds. Noctilucent clouds appear 47 to 56 miles (75 to 90 kilometers) above the Earth's surface, five to six times higher than the altitudes at which rainclouds form. See the orange line at the horizon on the left side of the photo? That's the atmospheric layer where rainclouds form; the wispy blue stuff above the sun is the noctilucent clouds.
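    That "five to six times" figure is simple arithmetic. If we take rainclouds to top out around 15 kilometers (an illustrative assumption; actual cloud-top heights vary by latitude and season), the stated noctilucent altitudes work out to roughly five to six times higher:

```python
# Quick arithmetic behind the "five to six times higher" comparison.
# The 15 km raincloud ceiling is an illustrative assumption; the
# noctilucent-cloud altitudes are the ones given in the text.
RAINCLOUD_TOP_KM = 15
NOCTILUCENT_RANGE_KM = (75, 90)

low_ratio = NOCTILUCENT_RANGE_KM[0] / RAINCLOUD_TOP_KM
high_ratio = NOCTILUCENT_RANGE_KM[1] / RAINCLOUD_TOP_KM
print(f"{low_ratio:.0f} to {high_ratio:.0f} times higher")  # 5 to 6 times higher
```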

    Some data suggest noctilucent clouds now appear brighter and at lower latitudes than in the past, perhaps due to global warming, NASA reports.

    Earthbound folks can get a look at noctilucent clouds by visiting polar regions during the summer. One man recently recorded a gorgeous time-lapse video of noctilucent clouds, plus the aurora, in northern Scotland:

    The spot of red at the bottom of the photo is a reflection, possibly from the camera lens, the window frame or an object inside the space station, NASA reports.

    [NASA Earth Observatory]


    Ron Swanson Is a Fictional Character

    Thinking about becoming a meatarian? Just know that Ron Swanson may not be a nutritional expert

    NBC

    Good ol' meat and potatoes. Without the potatoes.

    You might get scurvy, like a pirate. Cooked meat contains very little vitamin C, notes Donald Beitz, a nutritional biochemist at Iowa State University. Without the vitamin, scurvy would bring on rashes and gum disease, not to mention very bad breath. Moreover, meat lacks fiber, so you'd probably be constipated. All in all, you wouldn't be healthy or comfortable.

    That said, some groups of people have survived, even thrived, on an animal-only diet. Research suggests that the Inuit traditionally ate any number of meats, including seal, whale, caribou, and fish, but rarely, if ever, ate plant fiber. The key to their success, says Harriet Kuhnlein, founding director of the Centre for Indigenous Peoples' Nutrition and Environment at McGill University in Montreal, was eating every part of the animal, "and you have to eat some of it raw." Raw meat contains vitamin C (which is lost in cooking), and the skin, hooves, and bones provide fiber. For greens, Kuhnlein adds, traditional Inuit "ate the stomach contents of caribou and deer."

    Historically, they were quite healthy, she says; they almost never suffered from heart disease. Today, the meat-heavy diet lives on in the form of the controversial Atkins Nutritional Approach. Formulated by cardiologist Robert Atkins in the early 1970s, the diet prescribes that adherents dramatically cut their carbohydrate intake.

    The American Heart Association issued a statement in 2001 condemning the diet for cutting necessary sources of nutrients, stating that devotees were "at risk for compromised vitamin and mineral intake, as well as potential cardiac, renal [kidney], bone and liver abnormalities."

    This article originally appeared in the June 2011 issue of Popular Science magazine.


    Asian Palm Civet Eating Coffee Berries

    Leendertz on Wikimedia Commons

    Because that's what you want, right?

    When you pay $150 to $230 for a pound of exotic coffee, you want to know it's the real deal. Now, a team of chemists say they've developed a way to verify that beans labeled as "civet coffee" are authentic.

    What is civet coffee and what makes it so expensive? Normally, coffee farmers and processors pick the berries from their coffee trees, remove the fruits' flesh from their seeds, ferment the seeds, and then wash, dry and roast the seeds. For civet coffee, a cute Southeast Asian forest creature called the Asian palm civet helps out with some of these steps. Civets selectively eat the best ripe coffee berries, and their digestive tracts strip the fruit from the beans. The civets then excrete the beans, which civet coffee farmers gather, wash, ferment and roast. Regular little Santa's helpers, civets are!

    So. Civet coffee, also known by its Indonesian name, Kopi Luwak, needs to be gathered by hand from civet poop, making it rare and labor-intensive to produce. That explains its price… as well as some coffee sellers' inclination to try to market non-civet-processed coffee as Kopi Luwak, or to cut true Kopi Luwak with regular coffee. Those scams inspired a team of Japanese and Indonesian researchers to come up with a way to chemically distinguish regular coffee from civet coffee.

    The researchers analyzed Kopi Luwak that they produced themselves (presumably with civets in the lab), as well as commercially sold Kopi Luwak and commercially sold regular coffee beans from different regions of Indonesia. Using gas chromatography and mass spectrometry, techniques that tell chemists which molecules appear in a sample, they conducted a metabolomic analysis of the coffees. They found that digested coffee beans had significantly different levels of certain acids than non-digested beans. Perhaps the gastric juices and microbes in the civet digestive system give beans a distinctive acid profile, the researchers wrote in a paper published in July in the Journal of Agricultural and Food Chemistry.

    The detectable differences were great enough that they could distinguish a 50-50 mix of Kopi Luwak and regular coffee from 100 percent Kopi Luwak, the researchers report.
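    As a toy illustration of the logic (not the researchers' actual method or data): if each coffee's acid levels form a "fingerprint" vector, a sample can be compared against a reference profile by simple distance, and a 50-50 blend lands measurably far from pure Kopi Luwak. All acid names and numbers below are hypothetical placeholders.

```python
import math

# Toy sketch of acid-profile "fingerprinting." The acid names and all
# concentration values are hypothetical placeholders, not data from the
# actual Journal of Agricultural and Food Chemistry paper.
KOPI_LUWAK = {"citric": 0.8, "malic": 1.4, "quinic": 1.9}
REGULAR = {"citric": 1.6, "malic": 0.7, "quinic": 0.6}

def profile_distance(a, b):
    """Euclidean distance between two acid profiles."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def looks_like_pure_luwak(sample, tolerance=0.3):
    """Flag a sample as pure Kopi Luwak only if its acid profile
    sits close to the reference profile."""
    return profile_distance(sample, KOPI_LUWAK) < tolerance

# A 50-50 blend averages the two profiles and drifts measurably
# away from the pure Kopi Luwak reference.
blend = {k: (KOPI_LUWAK[k] + REGULAR[k]) / 2 for k in KOPI_LUWAK}
print(looks_like_pure_luwak(KOPI_LUWAK), looks_like_pure_luwak(blend))  # True False
```

    In the real study the discrimination came from multivariate analysis of the measured metabolites; this sketch only mirrors the blend-detection idea.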

    This is the first time anyone has determined which chemicals to look for when distinguishing Kopi Luwak from regular coffee, the researchers wrote. Their technique could work alone or in conjunction with the imperfect methods experts use now: judging the beans by color and smell.


    Predator Drone Pilot

    Wikimedia Commons

    The Air Force's drone program is too unmanned for its own good.

    While the vast majority of U.S. Air Force pilots still control their aircraft from inside the cockpit, about 8.5 percent are drone pilots who operate their vehicles remotely. That percentage is expected to grow, but there's a problem: the Air Force can't get enough people to volunteer for the training, according to a new report written by Air Force Colonel Bradley Hoagland for the Brookings Institution think tank.

    Here's the challenge: drones are usually chosen for jobs that are "dirty, dangerous, or dull," with dull being the key word here. Some surveillance drones require round-the-clock shifts, and the very stressful work is so time-intensive that drone pilots often cannot take advantage of additional training and education, which in turn dampens their prospects for career advancement, according to the study.

    Burnout also seems to be a major concern, as drone pilots quit at three times the rate of manned aircraft pilots.

    If the Air Force can figure out how to get more people to sign up for drone training, the problem should self-correct: A larger pool of drone pilots would hopefully mean shorter shifts and more time for career advancement.

    One way to increase the number of drone pilots would be for the Air Force to alter its requirements for pilots. The Air Force only allows commissioned officers to fly drones, and commissioned officers must have a bachelor's degree in addition to technical training. By contrast, the Army allows warrant officers, who only need a high school diploma or GED, to fly both unmanned aircraft and helicopters.

    Or it just might be that actually flying through the air will always be more awesome than piloting an aircraft from the ground.


    Controllers Family
    A brief round-up of how video games are destroying your life and also making you a superhuman

    A good rule of thumb in life is that if you enjoy something, it must be terrible for you. Kidding! Kind of. So it goes with video games. Except not, sometimes. But maybe also yes? Or both at the same time?

    Historically speaking, video games are a relatively new cultural phenomenon. So, as you might expect, some of the science on their psychological and physiological effects is murky at best. Popular media organizations (hi!) looking for a common link among kids to explain uncommon behavior might also sensationalize video-game research.

    It's probably safe to say that video games have some good effects on our health, along with some not-so-good effects, but it's not always easy to suss out which are which. Here are some examples of how confusing things get:

    Games can make you smarter.

    Games can make you dumber.

    Games can make you worldly.

    Games can make you a better driver.

    Games can improve your reflexes.

    Games don't improve reflexes.

    Certain games are relaxing.

    Lots of games are relaxing.

    Games can cause aggression.

    Games do not cause aggression.

    Games are kind of related to aggression.

    Games make you snack a lot.

    Games can make you fat.

    Games can make you skinny.


    Fewer Work Hours Doesn't Mean More Happiness

    Dreamstime

    Workaholics of the world, rejoice? We'll all be just as unhappy with a shorter work week.

    When it comes to working hours, less apparently is not more. Proponents of the six-hour workday will be saddened to hear that, as delightful as shorter days sound, decreasing work hours might not make anyone any happier.

    At least, that's what new research in the Journal of Happiness Studies suggests. The 10-year longitudinal study examined the impact of South Korea's 2004 working-hours reform on Korean workers' happiness. While people's satisfaction with their working hours increased, there was no significant effect on overall life or job satisfaction.

    The Five-Day Working Policy cut the country's official work week from 44 hours to 40 and made Saturdays officially non-working days. The policy aimed to combat the low productivity and high rates of on-the-job injury associated with Korea's long work hours, as well as to bolster the country's leisure industry. Over the years the study examined, 1998 to 2008, average working hours declined by 10 percent.

    The lack of impact on overall happiness could be partially due to companies reducing the number of hours their employees worked, but not the amount of work those employees were required to complete. As study author Robert Rudolph writes, "many companies responded with increased work intensity and downward adjustments of employee's leave and holidays to fill the gap."

    Women showed a greater increase in satisfaction with their work hours, which Rudolph attributes to the conflict Korean women face balancing work with traditional family duties like childcare and household chores. (The study only examined married or cohabiting couples with children, so how all the single ladies feel, we don't know.) One study Rudolph cites found that while men used their newfound free time for leisure and recreation, women largely used it to catch up on housework.

    Rudolph concludes that either long work hours aren't as intimately tied to personal happiness as we thought, or that whatever positive effect reducing working hours might provide is simply wiped out by companies cramming the same amount of work into fewer hours. And of course, since this study focused on a specific subset of people (South Korean couples with children), it's possible the results don't extend to everyone.

    3 Ways To Save Psychotherapy (08/21/13)

    Psychotherapy Decline

    Olfson & Marcus, 2010 via Brown University News

    People have been turning away from psychotherapy in favor of medication for years, despite the evidence that therapy works. Here are a few tips for improving the industry's image.

    The subtle art of lying on a therapist's couch is in rapid decline. Psychotherapy, the traditional one-on-one weekly session with a therapist, has been on the downswing for years, as more and more psychiatrists and even primary care doctors prescribe psychotropic medications instead of therapy. As the graphic above illustrates, between 1998 and 2007, psychotherapy use for people being treated for mental health conditions in the U.S. decreased from almost 16 percent to 10.5 percent, and therapy use in conjunction with medication went from 40 percent to 32 percent. By contrast, usage rates of medication alone shot up from 41 percent to a little more than 57 percent.

    The decline of traditional talk therapy isn't happening because seeing a therapist for depression or anxiety doesn't work. In fact, the evidence indicates that it does work, more effectively and for longer stretches of time than psychotropic drugs. Brown University professors Brandon Gaudiano and Ivan Miller, who edited November's Clinical Psychology Review issue devoted to the topic, write in a review that "psychiatric treatment guidelines tend to be biased toward promoting medications and underemphasize the role of psychotherapy as a frontline treatment," even when the research indicates it should be one.

    As with most things in healthcare, it often comes down to money. The authors explain:

    there are strong financial disincentives for psychiatrists to provide evidence-based psychotherapies compared with medications. For example, a psychiatrist can bill up to four patients for pharmacotherapy (15-minute medication check visits) compared with one hour-long psychotherapy session. It is not surprising, then, that surveys show that fewer and fewer psychiatrists provide psychotherapy. Also, most patients are being treated by primary care doctors who rarely offer psychosocial treatment alternatives.
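    The incentive in that example is simple arithmetic: four 15-minute medication checks fill the same clinical hour as one therapy session, so per-hour revenue favors medication whenever a single med check pays more than a quarter of a session. The dollar figures below are invented for illustration; only the visit lengths come from the quote.

```python
# Hypothetical illustration of the billing math described in the quote.
# The visit lengths (15 min vs. 60 min) come from the text; the dollar
# amounts are invented placeholders, not real reimbursement rates.
MED_CHECK_RATE = 70.0   # assumed payment per 15-minute medication check
THERAPY_RATE = 150.0    # assumed payment per hour-long psychotherapy session

pharma_per_hour = 4 * MED_CHECK_RATE   # four med checks fit in one hour
therapy_per_hour = 1 * THERAPY_RATE

# Under these assumed rates, an hour of medication checks out-earns an
# hour of psychotherapy, matching the disincentive the authors describe.
print(pharma_per_hour, therapy_per_hour)
```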

    So how will traditional clinical psychology save itself from extinction? Here are a few (semi-serious) ideas to consider:

    1. Go Big

    We've seen how powerful "Big Pharma" can be. Pharmaceutical companies have immense resources to draw on while lobbying for and advertising their drugs. Psychotherapy has no equivalent. "There is no 'Big Psychotherapy' devoted to developing, testing, and promoting psychological interventions as there is with pharmaceuticals," the authors write.

    But maybe there should be. Someone get fundraising, because with some Big Money, you could:

    2. Get Yourself A Cute Commercial

    Once psychotherapy advocates have a little bit of a cash cow to play with, they can go on a marketing spree. Big-name depression medications like Zoloft have cute commercials full of sad, adorable anthropomorphic lumps that pop a (brand-name) pill and suddenly start seeing the world in color again. Sitting at home on the couch, we say, "aha, I want that too!" and then rush off to ask our doctors for drugs.

    "The pharmaceutical industry spends upwards of $5 billion per year in the United States on [direct-to-consumer advertising]," Gaudiano and Miller write. The U.S. and New Zealand are the only countries that allow this type of medical advertising. "Research suggests that patients' requests for specific medications increase prescriptions received and can lead to unnecessary or inappropriate treatment."

    Well, it's time to fight fire with fire. Therapy can look fun on TV, too! The American Psychological Association has started a video campaign touting the benefits of psychotherapy. It's a sassy retort to the quick-fix medication mentality. Psychotherapy needs to get itself a marketing campaign, and here's a start.

    3. Drop The Biology Jargon

    Over the years, there's been a push to explain mental health problems in biological terms. To reduce the stigma of illnesses like depression and schizophrenia, mental disorders are cast broadly as diseases in the brain caused by chemical imbalances.

    While it's important to reduce the stigma associated with mental illness, this approach might not actually work. In fact, it could be making things worse, as Gaudiano and Miller explain:

    The push to medicalize mental health (i.e., promoting a "disease" model) has, not surprisingly, appeared to reinforce the notion of "differentness" which is not effective for reducing stigma and can, in fact, increase it. For example, biological explanations appear to lead to attributions of lack of self-control, dangerousness, and unpredictability, thus producing the desire for increased social distancing.

    Instead, they argue, these campaigns create the perception that mental health disorders can be treated only with intensive treatments like medication. "Talking it out" with a therapist doesn't sound like enough anymore. As another article in this issue of Clinical Psychology Review, by psychologist Brett Deacon, points out, this mentality has spread even among mental health professionals:

    The 2009 [National Institute of Mental Health] brochure on OCD provides a representative example. Consumers are encouraged to seek help from a doctor who may prescribe antidepressant, antianxiety, and/or beta-blocking medications; doctors may also provide a referral for "talk therapy." Promoting medication as the preferred treatment for OCD and relegating psychotherapy to adjunct status is surprising given that NIMH-sponsored research has shown a form of "talk therapy" known as exposure and response prevention to be more effective than pharmacotherapy in the treatment of adults with this disorder.

    Deacon's advice? Stop searching for a reductionist, purely biological explanation for psychological phenomena. "It is implausible to expect any one explanation (e.g., neurotransmitter dysregulation, irrational thinking, childhood trauma) to fully account for mental disorders," he writes. "No portion of the biopsychosocial model has a monopoly on the truth."


    Lab Mouse with Pups

    A chimeric lab mouse with her pups. These were not the mice used in the study below.

    Transgenic Core Facility at the National Institute of Mental Health

    A similar treatment for infertile human couples is still decades off.

    You may have begun as a spark in your parents' eyes, but in the Kyoto University lab of Katsuhiko Hayashi and Mitinori Saitou, baby mice begin life as… skin cells.

    After a decade of work, the two biologists have developed a way to turn mice's skin cells into "pre-egg" and "pre-sperm" cells of sorts, Nature reports. The cells are actually called primordial germ cells, with "germ" here meaning eggs and sperm, not germs like the ones you wash off your hands after going to the bathroom.

    Once implanted in the testes of infertile adult mice, the primordial germ cells turn into sperm that allow the mice to father healthy-seeming pups. When implanted into the ovaries of a female adult mouse, the primordial germ cells become eggs that researchers are able to fertilize in vitro and turn into pups.

    The Nature feature provides an engaging overview of this very cool technology. It's a nice long read. We'll just note one important caveat. The Japanese scientists originally meant to help other scientists, not infertility patients. The primordial germ cells are great for certain genetic experiments, and Hayashi and Saitou figured out a way to make millions of them in the lab. Although their technique sounds provocative, they weren't thinking about infertile couples wishing to have their own biological children.

    However, after receiving about a dozen letters, plus an inquiry from the editor of a magazine for gay and lesbian readers, the Japanese lab is preparing to study whether this could work in humans. It has a grant to study the technique in monkeys.

    It will be a long road. Hayashi told Nature that preparing for human studies will be much like starting over from scratch, and that an infertility treatment would not be ready for decades.

    [Nature]


    Kurchatov city, the center of the Semipalatinsk nuclear test site

    Wikimedia Commons

    Siegfried Hecker spent more than a decade securing a nuclear test site the size of New Jersey.

    When the Soviet Union broke apart at the end of the Cold War, several of its military and science facilities fell into disrepair. One of them, the Semipalatinsk Test Site, happened to be a nuclear test site the size of New Jersey, littered with leftover nuclear material that could potentially be made into weapons. Abandoned in what is today Kazakhstan, the site is now much less of a danger to the world, thanks to more than a decade of quiet work by Russian, Kazakh, and American scientists, work that, as the Times reported over the weekend, is detailed in a new report published this month.

    Siegfried Hecker was crucial to the American part of that equation. A former director of Los Alamos National Laboratory, he became interested in Semipalatinsk after Kairat Kadyrzhanov, director of the Kazakh Institute of Nuclear Physics, visited Los Alamos. Kadyrzhanov spoke of the site's problems, including radioactive hot spots and copper thieves. Here Hecker describes the scene on the ground when he arrived in 1998.

    I was alarmed to find unmanned guard posts and virtually no security at the site. My Los Alamos colleagues and I became convinced that Semipalatinsk was not only a serious proliferation problem, but also an urgent one. The copper cable thieves were not nomads on camelback, but instead they employed industrial excavation machinery and left kilometers of deep trenches digging out everything they could sell. We were concerned that some of that copper cabling could lead to plutonium residues.

    Following that 1998 visit, Hecker worked with Washington, Moscow, and Astana to clean up the site. Stanford has an excellent interview with Hecker, who is now a research professor and senior fellow at Stanford's Center for International Security and Cooperation. It covers everything from how they secured the site to which nuclear fears still keep Hecker up at night.

    [Center for International Security and Cooperation]



    Helium Supply

    Katie Peek

    Making the most of a limited resource

    In a free market, supply follows demand. But in the helium market, regulations set price and production, and with good reason. Helium gas-essential for MRIs, rockets, and space telescopes-is a limited resource. Radioactive elements in Earth's crust emit helium, which gets trapped in natural-gas fields, and we then extract it-or let it escape as we burn the gas. Until recently, the U.S. made most of the world's refined helium. Now it's a global game. Here's a history of the price, world production, and rising power of helium.

    This article originally appeared in the August 2013 issue of Popular Science. See more stories from the magazine here.



    Math Skills

    Fir0002/Flagstaffotos via Wikimedia Commons

    Differences in cognitive organization turn some people with autism into math whizzes.

    Though children with autism spectrum disorder often have difficulty with social interaction and communication, they're also much more likely to be savants--to show extraordinary skill in a particular area, like music, art or math.

    Not everyone with autism happens to be a math whiz, but a new study suggests that autistic kids who do show above-average math skills do so because they have slightly different brain organization than children who don't have autism.

    Researchers at the Stanford University Medical School recruited 36 children between 7 and 12 years old, half of whom had been diagnosed as autistic. All the participants had normal IQs and normal reading skills, but the autistic group did significantly better on the standardized math tests they took as a precursor to the study. When the researchers interviewed the children about how they solved math problems, the autistic children were more likely than the other participants to say they had analyzed and broken up the problem into smaller components, rather than relying on memory or counting on their fingers.

    The autistic children weren't just better at math than the non-autistic children; they also used a different part of the brain to solve problems. When solving math problems in an MRI scanner, their ventral temporal occipital cortex--the region of the brain normally involved in recognizing faces and objects--lit up, indicating an entirely different pattern of functional brain organization among children with autism.

    "Our findings suggest that altered patterns of brain organization in areas typically devoted to face processing may underlie the ability of children with autism to develop specialized skills in numerical problem solving," lead author Teresa Iuculano said in a press statement.

    This study was relatively small in size, but Iuculano and her colleagues are now studying differences in mathematical ability among a larger group of autistic children.

    The work appears in Biological Psychiatry.

    [Stanford University News]



    T. Rex Foot

    Ballista

    Hard to know, says Will Harcourt-Smith, an expert on early-human fossils at the American Museum of Natural History in New York. "Some infections leave their mark on bones. Athlete's foot is not one of those infections. But if we make some logical assumptions, we might be able to make a good guess."

    Athlete's foot is a fungal infection of the skin-typically by fungi of the Trichophyton genus-that causes skin to scale, flake, and itch. Which makes us ask: Did cavemen even encounter this fungus? "The fungus that causes athlete's foot was definitely around back then, and probably much earlier," says Tim James, who specializes in fungi evolution at the University of Michigan. "Like all fungi, it thrives in moist, unhygienic environments, which is why most people pick it up in locker rooms. I don't imagine that a caveman's dwelling, with a dirt floor covered in animal remains, was a very sterile place."

    But just walking around in fungus doesn't cause athlete's foot. Cavemen would have had to have worn shoes. "It turns out that athlete's foot is a disease of shod populations," says Bob Neinast, the lead blogger for the Society of Barefoot Living. "Anyone can pick up the fungus, but the thing to keep in mind is that it grows really well in a warm, dark, moist environment. That's the inside of a shoe." People who go barefoot, Neinast says, rarely get athlete's foot, most likely because exposure to fresh air keeps their feet too dry for the fungus to take hold and multiply.

    Which leads us to ask: Did cavemen go barefoot? "Within around 10,000 years ago, people had lovely shoes," Harcourt-Smith says. Our ancestors might have moved out of caves and into small villages by that time, he notes, but their footwear was still quite primitive, consisting of leather wrappings sometimes stuffed with grass for insulation (at least during cold weather). "If the shoes got damp and the person wore them often enough, that could have encouraged athlete's foot," he says.

    Even the worst case of athlete's foot wouldn't have killed a caveman, but it could have impaired his quality of life. "If the irritation gets bad enough, it will stop you in your tracks," says Cody Lundin, an outdoor survival-skills instructor who has gone barefoot for 20 years. "That would be unacceptable for a hunter population." Without antifungal sprays or creams, how would they have fought the burn? They might have been able to cook up a remedy. "If you take the green parts of a juniper plant and boil them, the mix makes a wonderful fungicide that will work on athlete's foot. Indigenous people might have used it," Lundin says. "Works great on jock itch, too."

    This article originally appeared in the December 2010 issue of Popular Science magazine.



    Gatling Gun Patent From 1865

    National Archives

    Patented in 1862, the Gatling gun heralded a new age of violence.

    Yesterday in 1866, the U.S. Army adopted machine guns for the first time. Or, well, almost machine guns: the Gatling gun, first patented in 1862, wasn't fully mechanical. Someone still had to crank it by hand to make the gun fire. In an era of warfare remembered for muskets and bayonets, the Gatling gun was a terrifying leap forward.

    How it worked: The shooter turns a crank, which pushes a firing pin into a loaded, ready barrel at the top of the gun. When the firing pin sets off the bullet, the next barrel moves into place and the just-fired barrel moves toward the ground, where it drops the spent bullet casing. Then, the turning motion brings the barrel up to a hopper of ammunition, where the next shot is loaded into the barrel. A firing pin then connects with the bullet in the newly loaded barrel, and the gun fires. This happens for all six barrels, as fast as the crank can be turned-up to 200 shots per minute. (Here's a helpful animation of this process.)
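The cycle above can be sanity-checked with a little arithmetic. This sketch assumes one shot per barrel per crank revolution, which follows from the cycle described, though the crank-speed figure is our own illustration, not from the article:

```python
BARRELS = 6
MAX_SHOTS_PER_MINUTE = 200  # rate quoted above

def crank_rpm_for_rate(shots_per_minute, barrels=BARRELS):
    """Crank speed (revolutions per minute) needed for a given firing
    rate, assuming each full revolution cycles every barrel through
    load -> fire -> eject, i.e. fires `barrels` shots."""
    return shots_per_minute / barrels

print(crank_rpm_for_rate(MAX_SHOTS_PER_MINUTE))  # ~33 revolutions per minute
```

So the quoted 200 shots per minute corresponds to a crank speed of only about 33 rpm, roughly one turn every two seconds.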

    Pictured above is the patent for an improved version of the gun made in 1865. This is the version first officially adopted by the U.S. Army, though technically not the first used. That distinction goes to the 1862 model. During the Civil War, Major General Benjamin Butler purchased a dozen for his forces and used them fighting near Petersburg, Va.

    While Gatling guns didn't change the outcome of the Civil War (unlike some technology that certainly could have), they played a major part in the colonial wars of the next 50 years. During that time, the Maxim gun appeared, a true machine gun that replaced Gatling guns by the start of World War I.

    Musing on the Maxim gun's role in colonial wars, British poet Hilaire Belloc quipped "Whatever happens, we have got / The Maxim gun, and they have not." The bloodiness of WWI, with tens of thousands dying in single days while charging machine guns, is in no small part a result of this evolution in weaponry.



    Makerbot Digitizer

    Makerbot

    We first spied the Makerbot Digitizer, a standalone 3-D scanner, back at South by Southwest--it's a rotating platform with a depth sensor, so anything you plop onto the platform can be scanned and then reproduced, like a sci-fi replicator. And now it's officially available for pre-order, for $1,400.

    The Digitizer, and 3-D scanning in general, is a key development in the adoption of 3-D printing. If you have a 3-D printer, the eternal question of "what do I do with it?" can only be answered by downloading models from resources like Thingiverse. But if you want something that's not there--and there are lots of things that aren't there--you're pretty much out of luck. 3-D scanners change that; you can make printable files out of any appropriately sized object you can find. It's pretty amazing stuff! Makerbot's site appears to be getting hammered at the moment, but once it's back up, you can pre-order the Digitizer here.



    It was only a matter of time, really

    This is one of those ideas it's surprising we didn't hear about earlier: a Google Glass-controlled drone! (Come on, guys, Google Glass has been around for, what, four months?) But then again, if this coding narrative from Glass developer Blaine Bublitz is any indication, it took a bit of pro hacking to make it possible.

    Bublitz started with the gesture control system Leap Motion, running it through the Arduino-based robot Shieldbot. After experimenting with that, he tried a similar strategy with Glass, loading his Glass set with an app that translates head movements into pitch and roll commands that can be sent to a quadcopter drone. That left Bublitz with the Glass-controlled 'bot you see above.
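Bublitz's actual app isn't reproduced here, but the core mapping is simple to sketch: head orientation arrives as pitch and roll angles, which get clipped and scaled into normalized stick commands for the drone. The function name and the 20-degree tilt limit below are our own illustrative assumptions:

```python
def head_to_drone_command(head_pitch_deg, head_roll_deg, max_tilt_deg=20.0):
    """Map head orientation (degrees) to normalized drone pitch/roll
    commands in [-1, 1]. Angles beyond +/-max_tilt_deg are clipped,
    so an extreme head tilt just means "full stick" rather than an
    out-of-range command."""
    def scale(angle):
        clipped = max(-max_tilt_deg, min(max_tilt_deg, angle))
        return clipped / max_tilt_deg

    return scale(head_pitch_deg), scale(head_roll_deg)

print(head_to_drone_command(10, -40))  # (0.5, -1.0)
```

A gentle 10-degree nod becomes half-stick forward pitch, while a 40-degree head tilt saturates at a full roll command.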

    I'm simplifying Bublitz's description, but if you're technically minded, by all means check out his full description. Then decide how it stacks up to this Oculus Rift-piloted drone.

    [Iced Blog via PSFK]



    Brainy Vole

    Daderot via Wikimedia Commons

    A study finds urban mammals have significantly larger brains than their rural cousins.

    Humans aren't just changing the climate, causing earthquakes, and, you know, generally adapting the world to suit our whims. Human activity might also be driving increased brain size in animals! Woo, evolution!

    In a study published yesterday, University of Minnesota biologist Emilie Snell-Rood examined a collection of mammal skulls from 10 different species to see how animal brains might be adapting to more urban environments. She found that some species are developing bigger brains in response to human disruption to their environment, both in urban and in rural communities in Minnesota.

    By Snell-Rood's estimates (using cranial capacity to approximate brain size), urban-dwelling white-footed mice and meadow voles have brains that are 6 percent larger than their rural counterparts. She and her co-author also found an increase in brain size among two species of shrews and two species of bats collected from rural areas of Minnesota.

    Some previous studies have found better learning and cognitive ability to be correlated with bigger brain size, though the link is disputed. Snell-Rood concludes that bigger brains and the accompanying increase in cognition could have helped animals adapt to the new challenges of finding food and surviving in human-dominated cities rather than the forests and prairies that once covered Minnesota. The current study suggests a follow-up exploring the question of brain size and urban vs. rural environments in live animals, since these results only show a correlation between the two, not a causal relationship.

    The study is published in Proceedings of the Royal Society B.

    [The New York Times]



    NIST Boulder's Ytterbium Atomic Clock

    Burrus/NIST

    And they're about 10 billion times more precise than your quartz wristwatch.

    At the foot of the mountains abutting Boulder, Colorado, in the labs of the National Institute of Standards and Technology, there sits the most precise clock in the world.

    It's so precise that, because it outstrips other atomic clocks, its creators weren't able to measure its precision until recently, when they built a second version of it. Now, with the two available to compare with one another, they've come up with a number for the clocks' precision, which clock physicists call the clocks' stability.

    "Clock stability is a term we use in the field that basically refers to-if you look at the ticking rate of the clock, how much does that ticking rate change over time?" Andrew Ludlow, a NIST Boulder physicist who works on improving the lab's atomic clocks, tells Popular Science. "Ideally, you want every tick to be exactly the same as the other."

    The NIST Boulder clocks have an instability of one part in 10^18. That's about 100 times more stable than the best cesium atomic clocks that international governments use to define the perfect second. And it's about 10 billion times more stable than quartz wristwatches.
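To get a feel for what one part in 10^18 means, here's a rough sketch. The quartz figure is back-solved from the "10 billion times" comparison above, and the age-of-the-universe interval is our own illustration:

```python
def accumulated_error_seconds(fractional_instability, elapsed_seconds):
    """Worst-case drift if the clock's fractional frequency error sat
    at its stated instability for the entire interval."""
    return fractional_instability * elapsed_seconds

AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600  # ~4.35e17 seconds

ytterbium_drift = accumulated_error_seconds(1e-18, AGE_OF_UNIVERSE_S)
quartz_drift = accumulated_error_seconds(1e-8, AGE_OF_UNIVERSE_S)

print(ytterbium_drift)                 # under half a second, ever
print(quartz_drift / ytterbium_drift)  # the "10 billion times" factor
```

In other words, a clock at this stability level, running since the Big Bang, would be off by less than half a second today.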


    The NIST Boulder clocks are made with technology a generation beyond that used in cesium atomic clocks. They happen to use atoms of ytterbium, a rare-earth element, but other next-generation clocks around the world use other elements, such as strontium and mercury.

    These next-generation clocks could be used to measure some pretty cool effects in fundamental physics. For example, Einstein's theory of relativity has been devilishly difficult to test experimentally. A NASA satellite measured the warping of space and time around Earth as recently as 2011. Next-generation atomic clocks, however, could measure the effects of relativity right here on Earth.

    The theory of relativity predicts that in a strong gravitational field, time should slow. Clocks such as NIST Boulder's should be able to detect that slowing… and whether that slowing is different on different places on Earth. "These clocks can very carefully measure gravitational field," Ludlow says. "It allows you to map out the gravitational field in an area."
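The size of that effect is easy to estimate. In the weak-field approximation, the fractional frequency shift between two clocks separated vertically by a height h near Earth's surface is g*h/c^2. The numbers below are our own illustration, not from the article:

```python
G_SURFACE = 9.81   # m/s^2, gravitational acceleration at Earth's surface
C = 299_792_458.0  # m/s, speed of light

def fractional_shift(height_m):
    """Fractional frequency difference between two clocks separated
    vertically by height_m meters (weak-field approximation g*h/c^2).
    The higher clock runs faster."""
    return G_SURFACE * height_m / C**2

print(fractional_shift(1.0))   # ~1.1e-16 per meter of elevation
print(fractional_shift(0.01))  # ~1.1e-18: a 1 cm height change sits
                               # right at these clocks' stability
```

That's why stability at the 10^-18 level turns an atomic clock into a gravity sensor: raising it by a centimeter produces a frequency shift it can resolve.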

    Ludlow's clocks are too large and fragile to move anywhere for experiments, however, so they're not often put to work to actually measure anything. The U.S. government has given NIST money to work on making more robust, portable atomic clocks for experiments, Ludlow says. More portable atomic clocks could also go into space, to perform physics experiments there.

    The NIST clocks are optical lattice clocks, which means they have an intense laser field that holds about 10,000 ytterbium atoms in place. Another laser excites the atoms, the movement of which is how the clock measures time. Exciting the atoms with a laser makes them vibrate at higher frequencies than atoms in cesium atomic clocks do. So optical lattice clocks tick faster and are able to tick off more precise units of time. Having so many atoms in the clock helps average out the uncertainties from any one atom.
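The statistical payoff of interrogating many atoms at once can be sketched with stdlib Python: averaging N independent noisy measurements shrinks the scatter roughly as 1/sqrt(N). The noise numbers here are illustrative, not the clock's actual noise budget:

```python
import random
import statistics

random.seed(0)
SIGMA = 1.0    # per-atom measurement noise, arbitrary units
TRIALS = 2000  # repeated clock interrogations

def scatter_of_mean(n_atoms):
    """Standard deviation of the n_atoms-average across many trials.

    Expect roughly SIGMA / sqrt(n_atoms); with 10,000 atoms the
    averaged signal is about 100x quieter than a single atom's."""
    means = [
        statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(n_atoms))
        for _ in range(TRIALS)
    ]
    return statistics.pstdev(means)

print(scatter_of_mean(1))    # ~1.0
print(scatter_of_mean(100))  # ~0.1
```

Going from 1 atom to 100 cuts the scatter by a factor of ten, which is why a lattice full of 10,000 ytterbium atoms averages down so much faster than a single-ion clock.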

    So the NIST ytterbium optical lattice clocks are the most precise in the world, a record Ludlow and his colleagues published today, in a paper in the journal Science. What about accuracy, or the clocks' measure of time against the true time? To measure a clock's accuracy, scientists try to measure all of the things in the world that could alter the clock's atoms, such as changing temperatures or the laser lattice's effects on atoms.

    The last time Ludlow and his team did this for the ytterbium clock was in 2009, when they found it was as accurate as a cesium atomic clock. They are now working on measuring accuracy again. As for the most accurate atomic clock in the world, it's also located in NIST Boulder and is called an aluminum quantum logic ion clock.



    Volumental Scanning

    Volumental

    Wave a sensor over an object and reproduce it, in the simplest setup we've seen to date.

    3-D scanning is a technology with all the right pieces floating around, yet to be assembled into a finished product that's easy for everyone to use. The idea: using a small gadget, you can scan in a 3-D object, then reproduce it with an increasingly affordable 3-D printer. It's like a photocopier for objects. Cool! But pretty wonky to do, at the moment. That wonkiness is what Volumental is trying to solve.

    Volumental is a simple web app that aims to handle all of that previously manual stuff automatically. The process goes like this: you plug a depth scanner, like a Microsoft Kinect, into your laptop. Pointing the depth sensor at the object to be scanned, you walk all around it, shooting it from all angles. That data is turned into a .obj file. So far, Volumental has already created and released all of these tools; you can use them here, right now.

    But the next step is pretty tricky. Depth sensors don't create perfect scans of objects; the data they create isn't printable immediately. The scans have to be cleaned up, their structure mapped as a mesh, and their edges made "watertight." Watertight means no gaps in the design: there can't be any holes anywhere, any gaps in the data, or else your object won't be printed properly. You can do that kind of thing with free software like MeshLab, but it's pretty complex and difficult to use. Imagine if you had to adjust a hundred settings every time you made a photocopy. You'd never use it!
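"Watertight" has a crisp definition for triangle meshes: every edge must be shared by exactly two faces, which means the surface is closed with no holes. Volumental's actual pipeline isn't public; this is just the textbook check, with our own function name:

```python
from collections import Counter

def is_watertight(faces):
    """Return True if every edge of a triangle mesh is shared by
    exactly two faces, i.e. the surface is closed with no holes.

    `faces` is a list of (i, j, k) vertex-index triples. Edges are
    keyed with sorted endpoints so winding direction doesn't matter.
    """
    edge_count = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(edge))] += 1
    return all(n == 2 for n in edge_count.values())

tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tetrahedron))      # True: a closed surface
print(is_watertight(tetrahedron[:3]))  # False: a missing face leaves a hole
```

The hard part of automatic cleanup isn't detecting the holes, which is this easy, but filling them plausibly without distorting the scanned shape.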

    Volumental, which just achieved its funding goal on Kickstarter, aims to use the funding to create a one-click, automatic clean-up service. "We won't be providing a full manual toolbox like Meshlab," says Volumental team member Ernest Ang. "Instead, the funding will help develop automatic mesh clean-ups during the reconstruction phase of the 3-D scans." And anything that makes 3-D scanning as easy as pressing the big green button on a photocopy machine sounds good to us.

    There are still ten days left on the Kickstarter; if the project keeps earning, Volumental will be able to add bonus features, like Instagram-like filters (sepia, black-and-white) and pre-made templates (like keychains and phone cases).



    Splish splash, stopping fire in its path

    This specially configured firefighting 747 supertanker can dump 20,500 gallons of water at once on a fire, before landing to refuel and reload. Helicopters can't carry as much (they drop anywhere from 100 to 3,000 gallons of water in a single load), but they are great at quickly grabbing refills, usually from nearby lakes or rivers. Or, in the case of this helicopter in Portugal, from a nearby backyard swimming pool.

    Props to the pilot.

    [Jalopnik]
