Thursday, September 09, 2010
The Cretaceous period's carnivorous answer to the camel has been unearthed in Europe after 130 million years, a new study says.
The new, hunchbacked species of dinosaur sprouted spiky, featherlike shafts on its arms; was probably a powerful runner; and likely ate small dinosaurs, crocodiles, and early mammals, researchers say.
Known from a finely preserved, nearly complete skeleton found in central Spain, the 20-foot-long (6-meter-long) Concavenator corcovatus—"the hunchback hunter from Cuenca"—had two raised backbones, each 1.3 feet (40 centimeters) taller than the dinosaur's other vertebrae.
C. corcovatus's hump possibly supported a mound of fleshy tissue storing fat, as on a camel, according to the study team, led by paleontologist Francisco Ortega of the Universidad Nacional de Educación a Distancia in Madrid.
Alternatively, the hump might have had a display role—for example, attracting a mate or intimidating rivals—or may have helped dissipate heat and regulate body temperature, Ortega said.
I refuse to call it a camelsaur.
So, quill-like structures, if not feathers, turning up even in a carcharodontosaurid means they likely evolved earlier and spread more widely than we thought, right?
Thursday, September 02, 2010
Computational scientists and geophysicists at the University of Texas at Austin and the California Institute of Technology (Caltech) have developed new computer algorithms that for the first time allow for the simultaneous modeling of the earth's mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, to produce an unprecedented view of plate tectonics and the forces that drive it.
A paper describing the whole-earth model and its underlying algorithms will be published in the August 27 issue of the journal Science and also featured on the cover.
The work "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics, director of the Caltech Seismological Laboratory, and a coauthor of the Science paper.
To create the new model, computational scientists at Texas's Institute for Computational Engineering and Sciences (ICES)—a team that included Omar Ghattas, the John A. and Katherine G. Jackson Chair in Computational Geosciences and professor of geological sciences and mechanical engineering, and research associates Georg Stadler and Carsten Burstedde—pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).
Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions. "AMR methods adaptively create finer resolution only where it's needed," explains Ghattas. "This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach."
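The core AMR idea can be illustrated with a toy sketch (this is not the authors' code, and the function and tolerance are invented for illustration): start from a coarse 1-D grid and split cells only where the solution changes rapidly, leaving smooth regions coarse.

```python
import math

def refine(cells, f, tol, max_depth=10):
    """Recursively split (a, b, depth) cells where f varies by more than tol."""
    out = []
    for a, b, depth in cells:
        if depth < max_depth and abs(f(b) - f(a)) > tol:
            mid = 0.5 * (a + b)
            out.extend(refine([(a, mid, depth + 1), (mid, b, depth + 1)],
                              f, tol, max_depth))
        else:
            out.append((a, b, depth))
    return out

# A field with a sharp feature near x = 0.5 -- loosely analogous to a narrow
# fault zone embedded in a smooth, mantle-scale flow field.
f = lambda x: math.tanh(200 * (x - 0.5))

uniform = [(i / 8, (i + 1) / 8, 0) for i in range(8)]  # coarse starting grid
adapted = refine(uniform, f, tol=0.1)
print(len(adapted), "adapted cells")
```

A uniform grid at the finest resolution reached here (depth 10) would need 8,192 cells; the adapted grid uses a small fraction of that, which is the source of the factor-of-thousands savings the researchers describe. The hard part they solved is doing this bookkeeping in parallel across hundreds of thousands of cores.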
"The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers," he adds. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that, Burstedde says, "allows for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today."
With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. According to Stadler, the AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the Ranger supercomputer at the National Science Foundation (NSF)-supported Texas Advanced Computing Center.
A key to the model was the incorporation of data on a multitude of scales. "Many natural processes display a multitude of phenomena on a wide range of scales, from small to large," Gurnis explains. For example, at the largest scale—that of the whole earth—the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones. "The individual fault zones play a critical role in how the whole planet works," he says, "and if you can't simulate the fault zones, you can't simulate plate movement"—and, in turn, you can't simulate the dynamics of the whole planet.
In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries. Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity—or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear—with simple changes producing unexpected and complex effects.
"Normally, when you hit a baseball with a bat, the properties of the bat don't change—it won't turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem," says Gurnis. "If the system is too nonlinear, the earth becomes too mushy; if it's not nonlinear enough, plates won't move. We need to hit the 'sweet spot.'"
After crunching through the data for 100,000 hours of processing time per run, the model returned an estimate of the motion of both large tectonic plates and smaller microplates—including their speed and direction. The results were remarkably close to observed plate movements.
Paper here, I believe.
Wednesday, September 01, 2010
With the impending retirement of NASA's space shuttle fleet, aerospace juggernaut Boeing is hard at work developing a new capsule-based spaceship that could be ready for its first commercial spaceflight by 2015.
Boeing's new Crew Space Transportation-100 spacecraft is designed to fly astronauts to and from the International Space Station (ISS), as well as future private space stations.
Keith Reiley, Boeing's commercial crew development program manager, will be presenting updates on the Commercial Crew Transportation System at the American Institute of Aeronautics and Astronautics' Space 2010 Conference and Exposition next week in Anaheim, Calif.
As one of the leading suppliers of human space systems and services, Boeing already has a strong heritage in the industry. [Video: Boeing's New Spacecraft]
"It was an enormous advantage," Reiley told SPACE.com. "A lot of the equipment we're looking at has ISS heritage. About half of our team were designers that came from ISS and had experience with the flight hardware. The other half were space shuttle designers."
To help reach its goal, the company looked to existing facilities, launchers and proven processes to ensure safety, lower development costs and reduce overall risk.
Boeing's CST-100 spacecraft is approximately 15 feet (4.5 meters) wide and can carry up to seven people. The cone-shaped capsule will look similar to NASA's Apollo and Orion spacecraft.
Boeing settled on the cone-shaped design because it was thought to be the safest and most inexpensive of the vehicle concepts that were considered, Reiley said.
The spacecraft is being designed for compatibility with a variety of rockets, including United Launch Alliance's Atlas and Delta boosters and SpaceX's Falcon rockets. This will give Boeing the flexibility to select an appropriate rocket later in the development process.
The spacecraft will also be equipped with a unique pusher abort system in case the crew encounters an emergency during launch.
"This is the first time anyone has proposed or succeeded with a pusher design," Reiley said. "The pusher appears, to us, to be simpler, less expensive and just as safe."
If necessary, the launch abort system would fire pressurized propellant for three seconds to quickly push the vehicle away from the rocket. A parachute would then be deployed to assist with the landing.
One of the advantages of the pusher design is that in the event of a nominal launch, the same propellant can be used on orbit, either to guide the CST-100 to dock with a space station or to reboost the stations themselves, whose orbits slowly decay over time.
"You get the ability to use the propellant to re-boost our customer stations or simply for orbital maneuvering to get there," Reiley said. "In order to catch up with the station you're trying to rendezvous with, you have to boost yourself up to the station's orbit, and all that takes a certain amount of fuel."
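The fuel cost of the orbit-raising step Reiley describes can be estimated with a standard two-burn Hohmann transfer (a back-of-the-envelope sketch, not a Boeing figure; the 200 km parking orbit and 400 km station altitude are illustrative assumptions):

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def hohmann_delta_v(r1, r2):
    """Total delta-v (m/s) for a Hohmann transfer between circular orbits."""
    a = 0.5 * (r1 + r2)                        # transfer ellipse semi-major axis
    v1 = math.sqrt(MU / r1)                    # circular speed at r1
    v2 = math.sqrt(MU / r2)                    # circular speed at r2
    v_dep = math.sqrt(MU * (2 / r1 - 1 / a))   # transfer speed leaving r1
    v_arr = math.sqrt(MU * (2 / r2 - 1 / a))   # transfer speed arriving at r2
    return abs(v_dep - v1) + abs(v2 - v_arr)

# Climbing from a 200 km parking orbit to a station at ~400 km altitude:
dv = hohmann_delta_v(R_EARTH + 200_000, R_EARTH + 400_000)
print(f"delta-v: {dv:.0f} m/s")
```

This works out to roughly a hundred meters per second of delta-v, which is why a capsule that keeps its abort propellant after a clean launch has a useful maneuvering budget left over.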
For Boeing, one of the main challenges in expanding into commercial spaceflight is designing a relatively inexpensive vehicle.
The company has set a design requirement that the CST-100 be reusable up to 10 times. The exact number of times the capsule is reused, however, will depend upon inspection after touchdown.