
You Can Finally Play 'Rock, Paper, Scissors' in Virtual Reality


In virtual reality you'll soon be able to fight intergalactic battles and explore the inside of the human brain. Now you can also take it back to grade school with a simple game of rock, paper, scissors.

Leap Motion, the gesture-control device manufacturer, has opened up a game of rock, paper, scissors on its developer portal. The game was created by third-party developer a&g Labs and a Leap developer with the username gward, and it won Leap Motion's VR Jam competition, a contest designed to spur the development of virtual reality apps. The game can detect hand configurations and reject inputs that don't fit an accepted rock, paper, or scissors formation. (Sorry, lizard and Spock.) The developers do give a nod to Fight Club, however; the game is set in a Durden-esque basement, with a bar of pink soap prominently displayed.

The game is available for Windows and OS X, but you'll specifically need an Oculus Rift and a Leap Motion controller. If you want to practice beforehand, thankfully the New York Times has come through and made a rock, paper, scissors simulator.


Japan Fires The World's Most Powerful Laser


The Asahi Shimbun

The LFEX petawatt laser.

Researchers at Osaka University are claiming to have fired the most powerful laser in the world. The 2-petawatt (two quadrillion watt) pulse lasted just one picosecond (a trillionth of a second).

For a rough comparison, in 2013, a 50 kilowatt (50,000 watt) laser shot down a drone two kilometers away.

Osaka's mega-powerful laser is called LFEX, or Laser for Fast Ignition Experiments, and measures more than 300 feet long.

While two petawatts is a formidable amount of power, the idea of a petawatt laser isn't new. The United States has a few of its own, notably a one-petawatt laser at the University of Texas at Austin.

Michael Donovan, associate director for the Texas Petawatt, says that it’s important to remember when talking about lasers of this size that, while the power output is immense, the energy used is actually very little.

“The energy of the Texas Petawatt, 150 to 200 Joules, is about that in a cup of coffee or a very hard tennis serve,” Donovan said via email. ”It is the energy used by a 100 watt light bulb in 2 seconds.” Power is energy over time, and since one picosecond is a very small amount of time, the power output turns out to be immense.
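Since power is just energy divided by time, the figures above are easy to sanity-check. Here's a quick back-of-envelope sketch using only the numbers quoted in this story (the lab's official pulse-energy figures may differ):

```python
# Power is energy divided by time, so energy = power x time.
# Plugging in the LFEX figures quoted above:

power_watts = 2e15       # 2 petawatts (two quadrillion watts)
pulse_seconds = 1e-12    # 1 picosecond (a trillionth of a second)

energy_joules = power_watts * pulse_seconds
print(f"Pulse energy: {energy_joules:.0f} J")                 # -> 2000 J

# For scale, a 100-watt bulb uses 100 joules every second:
print(f"100 W bulb equivalent: {energy_joules / 100:.0f} s")  # -> 20 s
```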

The scientists at Osaka University claim that their pulse (2 petawatts at 1 picosecond) is about 100 times the energy of UT Austin’s laser, and twice its peak power.

“Two petawatts, that’s a lot,” said Julio Soares, senior research scientist at University of Illinois at Urbana-Champaign. When asked what a laser of that power could be used for, Soares responded, “Well, to blow things up.”

We don't have any footage from Osaka University, which is now working on a 10-petawatt laser, but you can check out the Texas Petawatt laser in this video.

Also, it wouldn't be a story about a huge laser if we didn't mention the Death Star blowing up Alderaan. If you want to compare this laser's output to some other calculations, we've rounded up some of the best articles on whether the Death Star really had the juice to blow up a planet.

New Dissolving Ring Delivers Drugs Through Your Stomach For Seven Days


Shiyi Zhang, the paper's lead author, holds up a prototype of the device.

In recent years researchers have been looking for more efficient ways to deliver medicine over an extended period of time, whether to prevent drastic changes in hormone levels or to make sure people don't stop taking medication too early. Biomedical specialists have looked at surgically placed implants and new chemical configurations for the drugs in question. Now researchers from MIT have created another alternative: a ring-like polymer device that can deliver drugs to the stomach over the course of a week without putting the patient at risk, according to a study published yesterday in Nature Materials.

Swallowable, extended-release devices for your stomach have a number of qualities that make them difficult to engineer. They need to be stretchy and flexible so that they can be folded up into a pill, swallowed, then expand in the stomach and still work. If they're too big, they run a greater risk of breaking. If they're too small, they can slip through the pylorus, the opening connecting the stomach to the intestine (just 1.5 to 2 millimeters in diameter), putting the patient at risk of an intestinal blockage—a life-threatening condition that requires immediate surgery.

But MIT researchers created a material that satisfies all these requirements. Made of a nontoxic, degradable polyester gel and treated to be flexible, the ring-shaped device unfurls in the stomach within minutes of being swallowed, expanding to a diameter larger than the pylorus. It has another important quality as well: the device is pH-responsive. It remains solid in the acidic conditions of the stomach, but dissolves if it starts to enter the neutral pH of the intestine. When the researchers tested their prototypes in pigs, the device expanded within 15 minutes and stayed in the stomach for seven days before dissolving.

MIT is negotiating an agreement with biotechnology company Lyndra to bring the devices to the market, releasing drugs over weeks or even months. But the researchers anticipate that the new material could be useful for other medical applications, like in bariatric surgery to treat obesity or to create ingestible electronics to diagnose and monitor conditions in the gastrointestinal tract.

Terminator-Like Vision Could Help Robots Do Our Dishes


If the above gif looks familiar, it's probably because it looks eerily similar to this:

This, of course, is how the T-800 Terminator sees and recognizes objects in the world upon arrival from the future in Terminator 2: Judgment Day.

Similar to the movie, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created an object recognition system that can accurately identify objects using a normal RGB camera (no threatening blood-red color filter required). This system could help future robots interact with objects more efficiently while they navigate our complex world.

“Ideally we want robots to be cleaning our dishes at some point in the future. We want recognition systems where it does in fact see the objects that the robot should care about and manipulate them,” says Sudeep Pillai, lead author of the study.

The system builds upon classical recognition systems as well as another technique called "simultaneous localization and mapping" (SLAM), which allows devices like autonomous vehicles or robots to maintain three-dimensional spatial awareness. The team's new "SLAM-aware" system maps out its environment while it collects information about objects from multiple viewpoints. With each new angle, the program predicts what the objects are by breaking them down into their more basic components, then comparing this compiled description to a database of existing object descriptions. For example, if the SLAM-aware system sees a chair, it may break it down as a seat, four legs, and a back.

Since the SLAM-aware system creates a three-dimensional map of what it is seeing, it can also better separate one object from another. Each new viewpoint adds descriptive information about each object, decreasing ambiguity and increasing the likelihood of classifying objects correctly and telling them apart.
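The team's actual pipeline is far more sophisticated, but the core payoff of being "SLAM-aware" can be sketched in a few lines: once the map ties detections from different frames to the same physical object, the per-view classifier scores can simply be accumulated. A minimal, purely illustrative sketch (hypothetical labels and scores, not the paper's method):

```python
import math
from collections import defaultdict

# Minimal sketch of multi-view fusion, the idea behind "SLAM-aware"
# recognition: because the SLAM map associates detections of the same
# physical object across frames, per-view scores can be accumulated
# rather than trusting any single still frame. (Illustrative only; the
# MIT pipeline is considerably more involved.)

def fuse_views(per_view_probs):
    """per_view_probs: one {label: probability} dict per viewpoint."""
    log_scores = defaultdict(float)
    for probs in per_view_probs:
        for label, p in probs.items():
            # Summing log-probabilities multiplies the probabilities.
            log_scores[label] += math.log(max(p, 1e-9))
    return max(log_scores, key=log_scores.get)

# The first viewpoint alone would mislabel this object; three views settle it.
views = [
    {"chair": 0.40, "table": 0.60},   # ambiguous angle, wrong on its own
    {"chair": 0.70, "table": 0.30},
    {"chair": 0.65, "table": 0.35},
]
print(fuse_views(views))  # -> chair
```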

SLAM-aware Object Recognition System

MIT

The SLAM-aware system differs from classical image recognition systems, which the team calls "SLAM-oblivious": these do not create a three-dimensional map and can only detect objects one still frame at a time. As a result, SLAM-oblivious systems have far greater difficulty recognizing multiple objects in a cluttered environment. In the gif below, incorrect predictions flash red.

In one experiment, the SLAM-aware system correctly identified a scene of objects with 86.1 percent accuracy, which is comparable to advanced special-purpose systems that factor in depth information from infrared light. Although those special-purpose systems can be very accurate, reaching as high as 92.8 percent, that accuracy comes at the cost of time: some of those systems had a run time of about 4 seconds, whereas the SLAM-aware system ran in 1.6 seconds. Systems that use infrared light also have trouble working outdoors because of the difficult lighting conditions.

“The fact that you cannot use it outdoors makes it kind of impractical from a robotics standpoint, because you want these systems to work indoors and outdoors,” says Pillai.

In the future, Pillai and his team want to build on their system to help solve a classic robotics challenge called "loop closure": recognizing a location the robot has visited before, which is essential for navigating and interacting with the world. The SLAM-aware system may begin to address this problem by allowing robots to recognize specific objects in different locations and treat them as markers of those particular locations.

The researchers presented their study this month at the Robotics: Science and Systems conference in Rome, Italy.

“We don’t want to compete against competing recognition systems, we want to be able to integrate them in a nice manner,” Pillai says.

No word on whether the SLAM-aware system makes it any easier to locate and protect John Connor.

Space: Not Just For Rocket Scientists Any More


Photo Credit: NASA on The Commons

Projects: Spacehack, SpaceGAMBIT, NASA Open Source Software, NASA's Data Portal

For a long time now, space exploration has been the preserve of a tiny group of highly specialized and highly trained people, funded almost exclusively by public sector organizations. This is in large part due to the fact that space exploration has been prohibitively expensive, but it is also, according to innovators like Burt Rutan and Elon Musk, because politics and bureaucracy have stifled the innovations that would see costs come down.

That's all starting to change. With several related movements -- like open source, maker, and citizen science -- gaining momentum and converging, new possibilities are opening up. Here are a few for you to explore:

Spacehack bills itself as "a directory of ways to participate in space exploration," and the projects listed all have a citizen science angle to them. They are broken into five categories: data analysis, distributed computing, education, open source, and competition. For instance, one of the projects discussed on the site is Galaxy Zoo Radio, designed to help astronomers discover supermassive black holes.

SpaceGAMBIT is a funding organization that seeks to "tackle everything holding us back from being a spacefaring species." Previously funded projects include Open Bioreactor, an open-source personal desktop bioreactor; the Space Hacker Workshop, which connects citizen scientists to experiment flight opportunities; and the Asteroid Response Center, an interactive installation designed to explain NASA’s Asteroid Grand Challenge. If you have a worthy project, you might consider applying for funding here.

At the NASA Open Source Software site, developers and coders can pick up copies of software related to space exploration. For instance, there's a mission simulation toolkit, and a program called JavaGenes that helps to evolve more efficient software.

Finally, there is NASA's Data Portal. As the name suggests, it's a catalog of publicly available datasets that you can download, search, parse, explore, and otherwise mash up with other data. You can grab things like the Lunar Orbiter Photo Gallery, an extensive collection of over 2,600 photographs produced by all the Lunar Orbiter missions. Or you could download a 3D model of Canadarm, the robotic arm that was critical to many shuttle missions.

So now you have all the sites you need to spend a little less time watching Star Trek reruns, and a bit more time actually... making it so.

Chandra Clarke is a Webby Honoree-winning blogger, a successful entrepreneur, and an author. Her book Be the Change: Saving the World with Citizen Science is available at Amazon. You can connect with her on Twitter @chandraclarke.

Google Wants To Help You Avoid Long Lines


It’s lunch time and you’re hungry. You can now check Google to not only find food near you, but also how busy each restaurant is.

Google today announced a new feature for its business listings that shows the peak times at each business. Google doesn't say how the data is generated, but the company is likely using location data pulled from Android phones to determine how many people are gathered at a given time. (Google protip: Don't go to the gym after work on Monday.)

Google has long been criticized for its use of user data, especially how it tracks and uses location information. In 2009, Google wrote a blog post called "The bright side of sitting in traffic," which explains how Android phones can calculate road congestion based on anonymized information sent from those idling on the freeway.

People hate waiting in lines, notoriously at airports and in other crowded environments. Google CEO Larry Page has already expressed interest in building better airports, and this new use of data suggests an opportunity for innovation there.

For now, though, we’ll have to settle for the coffee shop and the gym.

This Is Why Virgin Galactic's SpaceShipTwo Crashed, According To The NTSB


NTSB Chairman Hart and investigators inspect SpaceShipTwo crash site


At a public meeting in Washington, D.C., this morning, the National Transportation Safety Board revealed that a combination of human error and inadequate safety measures caused the breakup and crash of Virgin Galactic’s SpaceShipTwo during a test flight last October 31, killing co-pilot Michael Alsbury and seriously injuring pilot Peter Siebold.

The accident was a tragic loss of life and a setback for the nascent space-tourism movement, perhaps the worst in a year of many private spacefaring setbacks. The NTSB investigation, which combined interviews, data analysis, and video and telemetry from inside the cockpit, confirmed what many suspected: Alsbury prematurely unlocked SpaceShipTwo's so-called "feathering mechanism."

This system, in which the rear tail assembly pivots upward to slow and stabilize the suborbital spacecraft during its descent, was meant to be unlocked at Mach 1.4, but Alsbury released it at a lower altitude and speed, Mach 0.92. Because this happened during the full-power climb rather than at apogee—the highest point in the vehicle's trajectory—intense aerodynamic pressure caused the feather to overwhelm its own motors, fully deploy, and then collapse, breaking up the craft. After the breakup, Siebold's parachute activated; he survived, but with serious injuries.

When the original SpaceShipOne was revealed in 2003, designer Burt Rutan, the now-retired founder of aviation innovator Scaled Composites, described the daring, unconventional re-entry system as a “shuttlecock” configuration meant to create a “carefree” re-entry.

SpaceShipTwo

NASA

As envisioned, a mothership aircraft, White Knight, carries the spacecraft to approximately 50,000 feet, where it's released. The crew then ignites a rocket motor that propels the ship and its 8 occupants—6 of whom are paying customers who've shelled out $250,000 for the thrill ride—at supersonic speeds to an altitude of 68 miles, above the boundary of space. Once there, they'll experience several minutes of weightlessness and tremendous views before the ship feathers its tail, re-enters the atmosphere, and glides to a landing back at its original airport.

But describing it as a “carefree” entry is misleading. The spacecraft is a complex system—intended to operate as a rocket, spaceship, and glider—that relies on precision timing and deft control inputs from its crew. Furthermore, as early test flights have revealed, the ride both under rocket power and during re-entry is an intense, borderline violent experience. There are plenty of opportunities for things to go sideways.

In the case of the October 2014 crash, though the actual deployment of the feather required both pilots' participation (via a pair of levers), it could be unlocked by just one of them. The NTSB speculated that Alsbury might have unlocked it early because he was worried about completing the required steps within the 26-second window before an abort would be called. The investigation also determined that Alsbury had no previous experience with the vehicle's behavior during powered flight, in particular its vibration and loading, which could have affected his judgment and reactions. In a statement released on YouTube, Virgin Galactic founder Richard Branson said that his company's engineers had "already designed a mechanism to prevent the feather from being unlocked at the wrong time," and added that Virgin Galactic would "continue to prepare and train" its pilot corps. Yet he maintained the NTSB investigation provided the company "a clean bill of health."

While that may cover the what, the true root causes—the how and why—are the larger and in some ways more important questions, particularly given that commercial space exploration is a new endeavor with many inherent risks. Indeed, the NTSB noted that Virgin Galactic and its partner, Scaled Composites, had inadequate safety mechanisms in place to prevent a single-point failure such as this. The vehicle was not designed with safeguards against premature unlocking or movement of the feather, the training system for the crews did not explicitly warn about the risks, and the simulator training did not go far enough to replicate actual flight conditions. For its part, Virgin Galactic has already implemented many of the changes recommended by the NTSB in its second space vehicle, now nearing completion in Mojave, California.

But the NTSB also called into question the nature of the Federal Aviation Administration’s oversight of commercial space travel, suggesting that its role as both a regulator and booster could be problematic.

“The FAA’s oversight role in commercial space is different from its oversight role in aviation,” noted NTSB Chairman Christopher Hart. “For commercial space, the FAA does not certify the vehicle. It only certifies the launch, focusing mainly on public safety. Nonetheless, many of the safety issues arose not from the novelty of a space launch test-flight, but from human factors that were already known elsewhere in transportation. We need to ask whether the FAA’s procedures and oversight were effective, and whether they can be improved upon.”

Investigators noted that the FAA should examine its systems for issuing launch permits, and the process by which it grants waivers from human factors and software hazard analysis requirements. In short, it needs to know far more about the vehicles being flown and what the risks are with each type of flight.

The implication, of course, is that commercial spaceflight might be proceeding too aggressively, and that adequate safety, communication, and regulatory systems are not fully in place. As a result, a simple, preventable error by an otherwise skilled and experienced pilot caused a fatal crash and the destruction of a high-profile, passenger-carrying spacecraft. “These two test pilots took on an uncommon challenge: testing technologies for manned commercial space flight, which is still in its infancy,” Hart said. “Human space flight is subject to unique hazards, and test-pilots work in an environment in which unknown hazards might emerge. We cannot undo what happened, but it is our hope that through this investigation we will find ways to prevent such an accident from happening again.”

Nintendo Wants To Turn Sleep Into A Game


Drawing filed with Nintendo's patent for Quality of Life

From the first narrative videogame to titles that moved games into 3D space, Nintendo has shown a prescient, uncanny ability to understand our relationship with technology. Its most recent initiative is called Quality of Life, but no one has quite known what to expect from it. This week, Nintendo fans unearthed a patent the company filed for the initiative's first device, a sleep monitor, according to Forbes.

There aren't a lot of details about the Quality of Life initiative, but based on the information that has trickled out over the past year, it seems to be centered on sleep. "Fatigue and sleep are themes that are rather hard to visualize in more objective ways," the company's late president, Satoru Iwata, said at a press conference in October last year. "At Nintendo, we believe that if we could visualize them, there would be great potential for many people regardless of age, gender, language, or culture."

Based on the abstract in the patent, it appears the device will be able to do just that. It looks like an iPhone docked in a terminal, and the terminal is what would sense information as the user sleeps next to it, detecting sounds via a microphone and movements using radio waves. The software would then analyze the data to determine the user's fatigue level and sleep quality. A ceiling projector displays the results and suggests ways for the user to improve her sleep habits.
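Nintendo's patent doesn't spell out the analysis, but a toy example shows the general shape of what sleep trackers do with a movement signal. Everything below, from the threshold to the scoring rule, is a hypothetical stand-in rather than Nintendo's method:

```python
# Purely speculative sketch: Nintendo has not published how its analysis
# would work. Consumer sleep trackers often score a night actigraphy-style,
# counting epochs with little movement as restful sleep.

def restful_fraction(movement, threshold=0.25):
    """movement: one motion-intensity reading per minute (arbitrary units);
    threshold: hypothetical cutoff below which a minute counts as 'still'."""
    still_minutes = sum(1 for m in movement if m < threshold)
    return still_minutes / len(movement)

night = [0.1, 0.0, 0.2, 1.5, 0.1, 0.0, 2.3, 0.1]   # toy 8-minute "night"
print(f"Restful: {restful_fraction(night):.0%}")    # -> 75%
```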

Drawing filed with Nintendo's patent for Quality of Life

Nintendo is partnering with the medical device company Resmed to produce the gadgets, as Forbes noted in a piece published last year, so there's reason to believe that its sleep analysis and recommendations will be grounded in some science.

It’s important to note that the final device that goes on sale might look nothing like the patent sketches, but the general idea probably will. So far, Nintendo has not announced when the Quality of Life device might hit the market.


Ceres’ Mountains And Craters Now Have Names


Topographic Map Of Ceres

NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Brown spots represent the highest elevations, while indigo represents the lowest parts.

Pluto may have won our hearts this month, but it’s not the only dwarf planet in the solar system. NASA is also exploring Ceres, the largest rock in the Asteroid Belt between Mars and Jupiter, and today the Dawn mission released a brand new map of the small world’s craters and mountains.

Ceres has a similar history to the object that was formerly known as our ninth planet. When it was discovered in 1801, scientists thought Ceres was a planet. And they considered it a planet for about 50 years, until they resolved that it’s merely one big rock in the midst of a belt of asteroids. Both worlds orbit the sun and are large enough to be round, but they have too many neighbors to be considered planets. They are therefore classified as dwarf planets—although Ceres is also classified as a large asteroid.

Luckily, 2015 is the (completely unofficial) Year Of The Dwarf Planets. In March, the Dawn spacecraft went into orbit around Ceres and started sending back some pretty incredible imagery, and revealing some mind-boggling, shiny spots of unknown origin.

Ceres' Bright Spots

NASA

They're watching you...

In new maps just released today, the crater that contains those mysterious spots has officially been named. “Occator” is 60 miles wide and 2 miles deep, and it’s named after the Roman agricultural deity of harrowing. (Though “harrowing” here refers to a method of leveling soil, its other meaning, “disturbing”, also seems fitting.)

All of the features on the dwarf planet’s surface have names that suit Ceres’ namesake, the Roman goddess of agriculture. Other names include Haulani, after the Hawaiian plant goddess, and Dantu, named after the Ghanaian god associated with planting corn. The names have been approved by the International Astronomical Union.

The maps show that Ceres has a diverse topography, with about 9 miles of elevation separating its highest point from its lowest. Its crust appears to be ice-rich.

Higher-resolution maps will come as Dawn descends from its current orbit of 2,700 miles to an altitude of 910 miles in August.

Ceres Gets Some Names

NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Ceres is named after the Roman goddess of agriculture, so the names of Ceres' craters and peaks come from agricultural mythologies from around the world.

Now The Blind Can Read Texts On This New Braille Smartwatch


Dot wearable

Touchscreens are of little use to the blind, who cannot see the shifting pixels on the smooth glass. That has not only slowed technological literacy among the blind, but has also impaired their reading literacy, cutting them off from most information that isn't published in print. Some tech companies have found workarounds, like having Siri read texts or creating braille e-readers, but these are often clunky and expensive.

A South Korean startup may have finally found a solution. It created Dot, the first braille smartwatch, complete with shifting cells of dots. This inexpensive gadget could help the blind catch up to the age of smartwatches, the sales of which have increased 475 percent in the last year thanks to the Apple Watch. But it could also be used as an educational tool.

“Until now, if you got a message on iOS from your girlfriend, for example, you had to listen to Siri read it to you in that voice, which is impersonal,” Dot CEO Eric Ju Yoon Kim told Tech in Asia. “Wouldn’t you rather read it yourself and hear your girlfriend’s voice saying it in your head?”

The Dot wearable looks like a cross between a Fitbit and a Pebble Time, Alphr notes. On its face are four cells, each with six active dots that can be raised or lowered to form four braille letters at a time. The watch pairs over Bluetooth and converts text from apps like iMessage into its braille equivalent, controlled by the user's voice commands. The device can last five days between charges.
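With six dots per cell and four cells, the watch can render four standard braille characters at a time and page through a longer message in chunks. A minimal sketch of that chunking, using standard six-dot English braille patterns (Dot's actual firmware and protocol aren't public):

```python
# Minimal sketch of how a 4-cell display like Dot could page through a
# message. The letter-to-dot mappings follow standard 6-dot English
# braille (dots 1-2-3 down the left column, 4-5-6 down the right).

BRAILLE = {  # letter -> raised dot numbers (a few letters for illustration)
    "h": (1, 2, 5), "e": (1, 5), "l": (1, 2, 3), "o": (1, 3, 5),
}

def pages(text, cells=4):
    """Yield successive groups of `cells` dot patterns for a message."""
    patterns = [BRAILLE[ch] for ch in text.lower() if ch in BRAILLE]
    for i in range(0, len(patterns), cells):
        yield patterns[i:i + cells]

for page in pages("hello"):
    print(page)
# [(1, 2, 5), (1, 5), (1, 2, 3), (1, 2, 3)]
# [(1, 3, 5)]
```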

Dot wearable


One key feature of the Dot wearable is its cost. Unlike braille e-readers, which can cost thousands of dollars, the device is slated to cost less than $300 when it hits the U.S. market in December.

But Dot envisions bringing braille beyond the wrist. The inventors have tested braille screen modules at ATMs and train stations, programming them to display information that regularly changes, such as account balances or train schedules. After the wearable’s launch in December, the startup will shift towards the public sector, which it anticipates could be its largest market.

Here’s How Microsoft Is Making 3D Videos For HoloLens


When Microsoft unveiled holographic Minecraft on HoloLens at this year's E3, the crowd let out a huge cheer. Now Microsoft has released a video showing how it creates its stunning, high-quality free-viewpoint video by capturing a traditional Māori haka war dance. Other captured action includes martial arts, break dancing, and a little boy destroying some Solo cups with an axe.

The HoloLens headset mixes reality with impressive, sometimes lifelike holograms that users can interact with. To capture dynamic action in such great detail, Microsoft built a huge TV studio in Redmond, Washington. The space is outfitted with a large calibrated green screen and 106 synchronized RGB and infrared cameras. All of these cameras take in the scene from different angles, and their output is compiled to create realistic three-dimensional models and spaces. And unlike the process required to capture Gollum's performance in The Lord of the Rings, no special capture suit is required of the performer.

The cameras turn the action into a 3D point cloud; algorithms then refine the cloud into tens of thousands of points per frame, which are ultimately reduced to a mesh of thousands of triangles. Extra detail is preserved in areas such as hands and faces before texture is finally applied.
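Microsoft's pipeline is proprietary, but the first step, thinning a dense point cloud down to a manageable set of representative points, resembles standard techniques like voxel-grid downsampling. A rough sketch of that idea:

```python
import numpy as np

# Illustrative only: Microsoft's capture pipeline is proprietary. Voxel-grid
# downsampling is a standard way to thin a dense point cloud: snap points to
# a coarse 3D grid and keep one representative (the centroid) per voxel.

def voxel_downsample(points, voxel_size):
    """points: (N, 3) array of xyz coordinates."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse).astype(float)
    return np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)],
        axis=1,
    )

cloud = np.random.rand(100_000, 3)       # stand-in for one frame of capture
coarse = voxel_downsample(cloud, 0.05)   # 20 x 20 x 20 grid over the unit cube
print(cloud.shape, "->", coarse.shape)   # (100000, 3) -> roughly (8000, 3)
```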

Although the HoloLens has been criticized for having a limited field of view, these incredibly detailed performances will undoubtedly change the way we consume entertainment in the future.

Far away from the Redmond studio, astronauts on the International Space Station will eventually test HoloLens in hopes that they will soon be able to give people a firsthand look at what they’re seeing. If the visuals are anywhere near as awe-inspiring as what astronaut Scott Kelly posts on Twitter, they're bound to be popular.

How Artificial Intelligence Can Make Drugs Better and Faster


When researchers used to try to diagnose and treat diseases, they would often search for one mutation on a single gene that was causing the problem. Or maybe they would look for average effects of a mutation that led to a disease across the entire population. But these approaches ignored the complexities and specifics that truly give rise to disease — demographic information, proteins, multi-gene interactions, environmental effects, and a whole host of other facets.

Until recently, computers weren't powerful enough to analyze all of this health information, nor were there large enough datasets to test. But the rise of artificial intelligence (AI) is making it possible to tease out interactions from the big health data now emerging from the ability to quickly sequence entire genomes and gather more molecular information than ever before. AI could make precision medicine a reality, since it may one day be able to identify the unique characteristics that could lead an individual to develop certain diseases, and how to treat them.

“That’s what precision medicine is all about. Each of us is different and each of us is genetically unique, so each of us should have a treatment that’s tailored to our individual genetic makeup and our individual environmental history,” said Jason H. Moore, Chief of the Division of Informatics at the University of Pennsylvania. “So I think that’s where artificial intelligence has a very important role to play, is being able to put together multiple genetic and environmental factors to identify the important subgroups.”

Two researchers, including Moore, presented their approaches using health AI during the Leveraging Big Data and Predictive Knowledge to Fight Disease conference at the New York Academy of Sciences on Tuesday. Health AI is essentially getting computers to think about genomics, diseases, and treatments like humans do but in a much faster, more powerful way, and on a larger scale.

One of the most exciting applications for AI is identifying new drug targets that previous methods have missed. Since developing a single drug takes up to 14 years and an average of $2.6 billion, pharmaceutical companies would like to do anything they can to decrease that time and cost.

Dr. Niven Narain, Co-Founder, President and Chief Technology Officer for biopharma company Berg, discussed his company’s Interrogative Biology AI platform that has identified several drug targets that are in development and at least 25 more that are in the pipeline. Berg’s platform pulls together as much data on individual patients as possible — from demographic information and environmental conditions to genetic mutations — in order to tease out opportunities for new treatments. He said Berg’s method has cut the time and money required to develop drugs by more than half.

“It’s not only that we’re reducing the time to produce the drug; the drug that’s produced is going to have more of an impact,” Narain said. “That’s also a metric that needs to be intangibly appreciated, because you could get things done faster [using current drug development methods], but it’s only going to help 10,000 people. But if you get it done faster [with AI] and you’re helping 10 million people, that’s a big difference.”

Using their AI system, EMERGENT, Moore's lab discovered five new biomarkers that could be potential drug targets for the eye disorder glaucoma. To do this, he said, they fed EMERGENT patient data for 2,300 healthy and unhealthy individuals, information on over 600,000 specific DNA sequences, and knowledge of specific gene interactions. One of the DNA sequences the system identified was already known to affect glaucoma, validating the approach; the other five are new opportunities for drug development.

Next, Moore said his group is working on developing better ways to visualize the data that AI computers spit out — the results can’t be helpful unless biologists can interpret what they mean and how they can be used. His group is actually using the video game platform Unity 3D to develop apps that could eventually allow researchers to fully immerse themselves in their data and AI algorithms inside a gaming system.

“Imagine all your big data lives in a video game, and you’re flying through it and you see something interesting. What you want to be able to do from within the visualization is say, ‘Aha, that looks interesting,’ and push a button, and have an analysis run on that piece of the data that you’ve seen and have the result come back in real time. And then you can fly through, see something else, push a button and get an analytical result. So you want the analysis to be intertwined with the visualization. I think that will revolutionize how we analyze big data.”

But Moore thinks it will likely take at least two decades before AI becomes accessible and interpretable enough to fully reach its potential. Narain said the first applications of AI in medicine could come in the next three to four years, particularly because the U.S. Food and Drug Administration and insurance companies are starting to encourage the use of big data in making health care decisions.

“I think AI is what is going to drive this voluminous amount of information into going from data to knowledge, and from knowledge to products,” Narain said. “AI’s going to help speed that process up, and help to remove the noise from what the real, true signal is. And that signal’s going to really drive processes.”

8-Year-Old Boy Receives Double Hand Transplant


Courtesy of Children's Hospital of Philadelphia

Zion Harvey had gotten used to living without his hands—he could eat, write, and play videogames just like any other eight-year-old. But he still couldn’t throw a football. Earlier this month, Harvey became the first child to receive a bilateral hand transplant; judging from a press conference conducted yesterday, he’s doing better than ever.

When Harvey was two years old, he had a terrible infection. His doctors had to amputate his hands and feet, and he received a kidney transplant. He's able to walk thanks to prosthetic feet. Ironically, the kidney transplant made Harvey a good candidate for the hand transplant: he was already on immunosuppressant drugs, so he ran less risk of rejecting the new hands.

Zion Harvey, before surgery

Courtesy of the Children's Hospital of Philadelphia

Harvey's doctors at the Children's Hospital of Philadelphia (CHOP) worked with the nonprofit Gift of Life to find a donor, a boy about Harvey's age. The 10-hour procedure was complex: the bones, blood vessels, nerves, muscles, tendons, and skin of the donor's hands were connected to Harvey's forearms. The bones came first, joined by steel plates and screws, and then the surgeons connected the arteries and veins using microvascular techniques. With blood flow established, surgeons connected the muscles and tendons, then reattached nerves and sewed up the surgical sites.

A few weeks after the procedure, Harvey seems to be doing great. He’s still taking immunosuppressant drugs and does rigorous hand therapy sessions at CHOP several times per day. If everything goes as planned, he will be allowed to go home to Maryland in a few weeks. The doctors expect that Harvey’s new hands will grow as he does.

Watch A Drone Gently Deliver A Package


Workhorse Group Delivery Drone In Flight


Screenshot by author, from YouTube

Delivery by drone is a big promise of the future, with small flying robots carrying goods from warehouse or truck right to customers' doorsteps. It's easy to make a gimmick delivery drone, one that haphazardly carries a burger or ice cream and then releases it onto the people below. Getting drone delivery right beyond the simple gimmick is hard work, so this footage of a test from the Workhorse Group is nice to see, showing a delivery method that feels plausible, practical, and still in the process of being refined.

Watch below:

Such simplicity! Such grace! A package set on the ground from a very low height, with its contents likely intact. Workhorse sees the drone not as a full delivery system itself, but as an extra tool that flies from truck to doorstep. It’s a modest, achievable goal for drone delivery, and one that likely doesn’t require rewriting the entirety of regulations for the sky. Instead, they’ve filed with the FAA for authorization to use truck-based drones.

[IEEE Spectrum]

Take A Panoramic Tour Of The International Space Station


Samantha Cristoforetti explains how the Columbus hatch works in the ISS

European Space Agency astronaut Samantha Cristoforetti spent 199 days conducting experiments in the International Space Station. Before she came back down to Earth in June, she snapped a series of photos inside the ISS, which the ESA has stitched together to make an explorable panorama.

As you navigate the ISS, you can click on written or video descriptions of items, recorded by Cristoforetti. The panorama lets you explore all of the ISS except the Russian modules; the ESA says the full station will be available later this year.

To check out the tour, click here.


Google Translate Adds 20 Languages To Augmented Reality App


Google, the company that ferries you to the internet and might soon ferry you to work, has expanded its toolbox of internet gadgets, adding new functionality to its Translate app: real-time text translation of 20 new languages via the camera. It's another step toward either true augmented reality or John Carpenter's They Live, but either way it's very cool.

The ever-hip programmers at Google even put together a little video, translating the lyrics of the 1958 hit single "La Bamba." The video showcases the app's versatility: the camera instantly translates languages like Bulgarian, Catalan, and Filipino to and from English.

The app is available for Android and iOS for free.

Stride 2015: China's Best Troops Take On A Grueling Combat Simulation


Blue Force China Zhurihe 195th Mechanized Brigade

The Blue Force

www.news.cn

The PLA's 195th Mechanized Infantry Brigade, commonly referred to as the "Blue Force" (red is the friendly color in China), is China's resident expert on Western land tactics.

Stride 2015 is this year's annual exercise where Chinese mechanized brigades are rotated to the Zhurihe Training Base in Inner Mongolia, to be pitted in grueling simulated combat against Zhurihe's resident "Blue Force". The Blue Force, the 195th Mechanized Infantry Brigade, is supposed to simulate the tactics and operations of NATO ground forces like the U.S. Army; the exercise moderators supply them with the location of visiting forces, judicious airstrikes, and the occasional nuclear strike. It's a tough slog. In Stride 2014, only one visiting brigade was able to defeat the Blue Force, at the cost of 50% casualties.

China Zhurihe 2015 Tank

Hero Tank

CCTV 7

One Red Force tank crew, in a ZTZ-59, took the initiative after losing contact with its main force, destroying several Blue Force tanks and infantry formations while dodging 10 anti-tank missiles, then using their disabled tank as a roadblock. Their war-exercise exploits were enough to earn them a spot on a CCTV news broadcast. (Thanks to Hongjian for uncovering the clip.)

This year, ten "Red Force" visiting brigades were selected from China's seven military regions: Beijing, Chengdu, Guangzhou, Jinan, Lanzhou, Nanjing, and Shenyang. Compared to more structured previous Chinese military exercises, Zhurihe focuses on finding deficiencies in PLA ground combat tactics, especially in the ability of mid-level officers and NCOs to take the initiative in responding to battlefield setbacks. While the older Blue Force ZTZ-59 tanks and ZSD-63 armored vehicles incongruously stand in for U.S. M1A2 tanks and M2 Bradley infantry fighting vehicles, they're made lethal by the Blue Force's in-depth study of NATO mechanized operations.

China Zhurihe Laser 2015

Laser Tag for Keeps

www.top81.cn via lt.cdjby.net

Laser beam designators (like the one installed in the barrel of this PF rocket launcher) and receivers are an essential part of modern army exercises. While not as flashy as live-fire exercises, they allow for actual engagements, such as tank on tank, while providing more data, increasing safety, and costing less (no need to pay for live ammunition).

To simulate real-time combat without actually blowing up tanks, each infantryman and vehicle at Zhurihe is equipped with a location transponder, laser transmitters, and a laser receiver, similar to the U.S. Army's MILES training kit. Laser transmitters mounted on gun and cannon barrels are "fired" at enemy units and personnel; a successful hit is registered by the target's laser receiver. Transponders let the exercise referees radio the location of Red Forces to the Blue Force, simulating enemy drones and spy satellites.

China Zhurihe 2015 Taipei

Eye for the Headlines

www.top81.cn

Red Force infantry storm a civilian building during Stride 2015. The building, built in the Renaissance-Baroque style of east Asian public buildings, has drawn international attention because of its resemblance to the Presidential Office in Taipei, Taiwan. Also note that the streetlights around the building have solar panels installed.

A minor controversy made the Internet rounds this year, as footage emerged of Red Force infantry operating around, and apparently storming, a civilian building that bears a resemblance to the ROC Presidential Office in Taipei, Taiwan. (In fairness, many early 20th-century east Asian government buildings share the same front-tower, double-courtyard layout.)

China Zhurihe 2015 Tank 26th Group Army

26th Group Army

Xinhui, via China Defense Forum

Armored vehicles from the 26th Group Army Red Force brigade maneuver in Zhurihe, Inner Mongolia. The communications vehicle on the ZBD-97 IFV chassis in front (identifiable by its large topside communications dome) is an integral part of bandwidth-heavy modern Chinese land warfare.

Stride 2015 is far from the only summer exercise Chinese forces have engaged in. The PLA's artillery units got their chance to shine in Firepower 2015, while the PLAN launched a large-scale amphibious landing exercise using modern hovercraft earlier this month. As PLA military exercises become more realistic and inventive, the Chinese are obviously sparing no pains to prepare for the worst.

You may also be interested in:

China Practices Tropical D-Days with Tanks and Hovercraft

China Builds the World's Fastest Tank Gun, then Tries to Hide It

China Mobilizes Forces on Burmese Border

Biggest "Anti-Terrorist" Exercise in the World Stars Chinese Drones, Russian Troops and an Ukraine-Inspired Wargame

China Joins the Tank Biathlon, the "Sport" of Main Battle Tanks

Chinese Special Forces Take 1st, 2nd and 4th Place at 'Olympics' for Elite Warriors

Smart Rifle's Software Can Be Hacked To Shoot Off-Target


TrackingPoint Rifle


Screenshot by author, from YouTube

TrackingPoint's rifles are clever pieces of technology, merging cameras, sensors, and Linux software with a sniper rifle to create a gun very good at hitting targets far away, even when fired by an untrained shooter. The rifles are primarily marketed to hunters, though last year the U.S. Army was rumored to be evaluating them. A rifle that relies on software comes with a new risk: it can be hacked. Security researchers and wife-and-husband duo Runa Sandvik and Michael Auger have demonstrated a successful hack, fooling the rifle's software into misdirecting the bullet.

TrackingPoint's rifle can connect to WiFi, allowing a computer to stream video from the rifle's camera as the shooter looks down the scope. The connection also allows the shooter to adjust the targeting system's settings for varying conditions, such as bullet weight or wind speed.

When functioning normally, the shooter points the rifle at the target and hits a button marking exactly where they want the bullet to hit, which sets a crosshair in the rifle's scope. Finally, the shooter adjusts the orientation of the rifle, making sure the bullet will actually travel in the intended direction, then fires.

The TrackingPoint system is designed to account for wind speed, temperature, distance, bullet weight, and other factors that make the math of shooting hard to do on the fly, so when working as intended, it makes for very accurate shots. When hacked, however, the rifle's settings can be radically adjusted without the shooter knowing, sending shots off chaotically in very different directions than intended. As demonstrated, a bullet fired from the hacked rifle can easily miss by feet, potentially hitting something or someone relatively far from the intended target.
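To get a feel for why quietly corrupting a single input sends shots wide, consider just the gravity portion of the aiming math. The sketch below is a deliberately simplified model, no drag or wind, with hypothetical numbers rather than TrackingPoint's actual solver:

```python
# Toy illustration of why tampering with one ballistic input shifts a shot:
# time of flight depends on muzzle velocity, and gravity drop grows with the
# square of that time. Real solvers (TrackingPoint's included) also model
# drag, wind, spin, and temperature; here muzzle velocity simply stands in
# for whichever setting an attacker corrupts.

G = 9.81  # m/s^2

def holdover_m(distance_m, muzzle_velocity_ms):
    t = distance_m / muzzle_velocity_ms   # crude time of flight (no drag)
    return 0.5 * G * t ** 2               # bullet drop the solver must correct

true_v = 850.0     # actual muzzle velocity, m/s (hypothetical load)
hacked_v = 600.0   # corrupted value written into the scope's settings

for d in (200, 500, 1000):
    error = holdover_m(d, hacked_v) - holdover_m(d, true_v)
    print(f"{d:5d} m: aim error ~{error:.2f} m")
#   200 m: aim error ~0.27 m
#   500 m: aim error ~1.71 m
#  1000 m: aim error ~6.84 m
```

Even at 500 meters, the corrupted setting shifts the computed aim point by well over a meter, which is more than enough to miss by feet.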

The human still aims the rifle, so this hack isn't quite as dangerous as turning the gun on its users. Besides degrading the rifle's aim, the hacker can either lock the original user out of the software or erase it completely. That turns a high-tech, auto-aiming rifle into a regular dumb rifle with a heavy camera and computer system on top. The hack can even stop the gun from firing at all.

Lest you think this is easy, it's important to note that the WiFi on the TrackingPoint rifle is off by default, and there are some modest security measures in place, like user identification numbers and default passwords, which pose obstacles to the would-be hacker. But a determined effort could readily get through.

Despite the potential for hacking, it's hard to see exactly how it could become relevant. Someone who can get close enough to access the rifle over WiFi is also well within range to destroy it with a small explosive or attack its user with a handgun. In professions like the armed forces, where an unhackable, super-accurate shot is required, there already exists an option we've had for centuries: well-trained human snipers, using guns that don't need software.

[Wired]

New Sun-Blocking Material Uses Compounds From Algae And Fish


Mycosporine-based sun-blocking film

Researchers have used compounds found in algae and reef fish mucus to create a material that naturally blocks harmful UV rays, according to a paper published recently in ACS Applied Materials & Interfaces.

The sunscreen you buy at your local pharmacy contains ingredients to block two different types of light from the sun—UV-A, which has longer wavelengths and can cause cancer over time, and UV-B, with shorter wavelengths that cause sunburns. But there are concerns about some of the chemicals in commercial sunscreens, which may disrupt some of the body's more delicate systems if they find their way inside.

But there's a class of natural compounds, called mycosporines, that absorb both types of UV rays and would be safe if ingested. Researchers have wanted to use mycosporines in sunblock for more than a decade, but they weren't easy to fix in place: when scientists put them in a liquid sunscreen for people to apply to their skin, the mycosporines would smear and dribble away, rendering them largely ineffective.

Now researchers have figured out how to fix mycosporines in place by attaching them to a polymer scaffold. For this experiment they used chitosan, a material derived from shrimp and crab shells and found in a huge range of commercial products, but plenty of other polymers would work just as well, they note.

The material could absorb UV-B rays 192 percent more effectively than most commercial sunscreens, and the film was stable after 12 hours of sun exposure or temperatures up to 176 degrees Fahrenheit. These qualities make the material a good candidate for a range of applications on biological and nonbiological materials. Most immediately, the film could be used in clothes and outdoor furniture, both of which can be damaged by too much sun exposure.

Presumably the researchers would hope to reach the biggest possible market with a biocompatible sunscreen: human skin. Though the Food and Drug Administration (FDA) has been slow to approve new sunscreens in the past, Chemical and Engineering News notes, a sunscreen made from mycosporines might be easier to approve because its sun-blocking components are all found in nature.

Researchers Successfully Transport Blood By Drone


Testing The Blood Drone


Johns Hopkins Medicine

Pathologist Timothy Amukele, left, teamed with Robert Chalmers and other engineers to create a drone courier system that transports blood to diagnostic laboratories.

Would you trust a drone with your blood? A new study by Johns Hopkins shows that, at least for testing purposes, a small drone can safely transport a small amount of blood without damaging it. The study was a proof of concept, with perhaps the secondary goal of getting "blood" and "drone" into a headline together. It's also potentially good news for patients who need medical care in rural areas, as flying blood safely through the sky spares the dangers and delays of impassable roads.

The study was done as a collaboration between Johns Hopkins and Uganda's Makerere University, and headed by Johns Hopkins pathologist Timothy Amukele. Blood can be damaged in transport, but the drone flight didn't appear to harm it. From the release about the study:

Of particular concern related to the use of drones, Amukele notes, is the sudden acceleration that marks the launch of the vehicle and the jostling when the drone lands on its belly. "Such movements could have destroyed blood cells or prompted blood to coagulate, and I thought all kinds of blood tests might be affected, but our study shows they weren't, so that was cool," he says.

To test the impact of drone travel on blood, the researchers took over 300 samples (six each from 56 volunteers) and drove them to a site an hour away. Half the samples were then packaged for drone flights and flown for between six and 38 minutes in a hand-launched drone. After their flights the samples were unloaded, and all of the samples, including the ones that never took a drone trip, were driven back to the hospital and tested normally. No meaningful differences were found between flown and unflown samples.
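The release doesn't detail the statistics, but with paired samples from each volunteer, the natural analysis is a paired comparison for each blood test. A minimal sketch with simulated numbers (not the study's data or its exact method):

```python
import numpy as np
from scipy import stats

# Sketch of the kind of check behind "no meaningful differences": each
# volunteer contributed samples to both arms, so compare flown vs. unflown
# results as pairs, analyte by analyte. The numbers here are simulated;
# the study's actual statistical methods are described in the paper.

rng = np.random.default_rng(0)
unflown = rng.normal(140.0, 4.0, size=56)          # e.g. sodium, mmol/L
flown = unflown + rng.normal(0.0, 1.0, size=56)    # flight adds only noise

t_stat, p_value = stats.ttest_rel(flown, unflown)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means no detectable shift between flown and unflown pairs.
```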

With the proof of concept done, future research could test the idea in rural areas, where drones could deliver medicine to testing centers far away, and more quickly than by car or on foot.
