Tuesday, June 30, 2015

The Richest Reef: Time to Call It a Day

One of the 2015 expedition’s final sunsets at Anilao, on the west coast of Luzon.

Throughout this seven-week expedition, nearly 50 team members from the U.S. and the Philippines have explored the biological richness of the Verde Island Passage, widely regarded as the most biodiverse marine habitat on the planet. They sampled mangrove thickets and eelgrass shallows. They examined ocean bottoms covered by little more than sand and rubble, and reefs crowded with multicolored corals. They ventured to depths beyond 400 feet, where light scarcely penetrates and where bizarre, resourceful creatures find a way to make a living despite the limitations. And now, the expedition has come to a close.

[embedded content]

Late last week, several members of the team took a break from analyzing and processing their recent discoveries, and planning upcoming research trips, to chat about their perspectives on this year’s expedition and what comes next. The live event featured Terry Gosliner and Luiz Rocha, two of the expedition’s scientific leads; Elliott Jessup, the head of the Academy’s diving program; and one of the Academy’s collection managers, charged with making sense of the specimens and data that have come back from the field. If you missed the live event, you can check out the recorded version (embedded above) via Google Science Fair’s YouTube channel.

A spectacular new species of nudibranch discovered in the Verde Island Passage. (Photo by Terry Gosliner)

As you know if you’ve followed this series, what the scientists discovered in the wide variety of habitats they explored was nothing short of spectacular. An overwhelming abundance of organisms and layered diversity were recurring themes on each and every dive. Novelty, though, is what seems to interest people most. A question the scientists get a lot at the end of an endeavor like this one is, “How many new species did you find?”

Benthic ctenophores (comb jellies) collected from the twilight zone near Anilao. (Photo by Bart Shepherd)

As simple as it might sound, this can be a difficult question to answer, depending on the taxonomic group one studies. For example, Gosliner and Rocha tend to know right away when they have a new species of nudibranch or fish on their hands. In contrast, a new alga or obscure invertebrate might require months of careful study to distinguish it from other known species.

So here’s what we know just a few weeks since the field lab in Anilao was packed up and returned to its open-air restaurant status: During the nearly 1,200 scientific dives conducted on this expedition, the team discovered approximately 100 new species—a number that may increase significantly as specimen analysis deepens and intensifies. Of the 15 live fish that were collected from twilight-zone depths and brought up in decompression chambers, every one survived and made it safely to the Academy’s Steinhart Aquarium in San Francisco, where they’re waiting to go on exhibit in a new twilight zone exhibition planned for summer 2016. So did a small assortment of strange and colorful benthic ctenophores that have become unlikely favorites of several of the aquarium biologists.

A twilight zone diver and two support divers decompressing off the coast of Verde Island. (Photo by Bart Shepherd)

All told, the research team collected some 18,000 individual scientific specimens, which sounds like an awful lot. But, as I wrote in an earlier post, those collections, which include many organisms you can hardly see without a microscope, don’t add up to much in the way of volume or weight. And the number itself is tiny in comparison to the tremendous scientific value these specimens will provide for decades to come—and the impact that our knowledge and understanding of these plants and animals might have on efforts to protect the richest of reefs and the habitats that support them.

If you watched the hangout referenced above, you caught a brief glimpse of just how passionate these scientists are about the work they do, and you have a sense of what an eye-opening experience it was for me to follow them into the field and underwater to see them in action and learn why this type of research matters. While every member of the team was happy to call it a day and travel home at the end of a long, exhausting expedition, they will all tell you that there is much left to be done to understand and protect this extraordinary place, and that they can’t wait to come back and continue exploring Earth’s richest reef.

The Richest Reef: To Collect or Not to Collect?

Bird Literally Weighs Its Food Options

Mexican Jays compare peanuts to determine which one has the most meat inside before choosing one for a meal. Karen Hopkin reports.


If you’ve ever been to an all-you-can-eat buffet, you know how important it is to choose wisely. You don’t want to fill up on salad when so many richer dishes await. It seems some birds also weigh their mealtime choices—literally. A study finds that Mexican Jays pick up and shake peanuts to assess their relative heft before choosing one. That report is served up in the Journal of Ornithology. [Piotr G. Jablonski et al.]

Foods that hide their edible bits on the inside present a challenge to hungry diners. How can you tell which shells harbor the biggest nuts? We humans knock on melons or squeeze avocados. But how do other species select the highest-quality snacks?

To find out how the jays do it, researchers fiddled with their feed. First they doctored peanuts so that some contained three nuts while others had none. When they offered these pods to some jays, the birds turned their beaks up at the empty shells and instead chose those that were full. And when the jays were allowed to choose between normal peanuts and those that weighed just one gram more, because the researchers had stuffed them with clay, the birds again went for the heavier meal.

Videos revealed that the jays shake the nuts before selecting one, which apparently lets them gauge the nut’s mass and perhaps also listen for the rattle of a well-packed shell. Pretty clever for a bird brain.

—Karen Hopkin

Mexican Jay sounds courtesy of Cornell Laboratory of Ornithology

Liberia Records Ebola Death after Country Declared Virus-Free


June 30, 2015


By Alphonso Toweh

MONROVIA (Reuters) - A Liberian has died of Ebola in the first recorded case of the disease since a country at the heart of an epidemic that has killed more than 11,000 people was declared virus-free on May 9 after going 42 days without a new case.

The body of a 17-year-old tested positive for Ebola in Margibi County and authorities have begun tracing people the victim may have come into contact with while infected, Deputy Health Minister Tolbert Nyenswah said on Tuesday.

"There is no need to panic. The corpse has been buried and our contact tracing has started work," Nyenswah told Reuters. Margibi is a rural area close to the capital Monrovia, and is home to the country's main international airport.

A total of 11,207 people died from Ebola in Liberia, neighboring Guinea and Sierra Leone since the outbreak began in December 2013, World Health Organization (WHO) spokesman Tarik Jasarevic told a news conference in Geneva.

Around 43 percent of those deaths were in Liberia, where the world's worst outbreak of the disease peaked between last August and October with hundreds of cases a week.

New cases have tapered this year, with 12 new confirmed cases reported in Guinea and eight in Sierra Leone in the week to June 21, according to WHO figures. Even so, health officials urge vigilance to prevent a resurgence of the disease.

The new case will test Liberia's response capacity at a time when international health organizations have wound down their presence in the affected countries, said Fatoumata Lejeune-Kaba, spokeswoman for the U.N. Ebola response mission.

Liberia fought Ebola at a community level, adopting regular hand-washing and the safe burial of bodies among other measures and the discovery of the new case shows that systems for testing remain in place, she said.

"This should have been expected because as long as there is Ebola in the region no one country can be safe. Liberia is vulnerable because of Guinea and Sierra Leone."

Ebola damaged the health care systems and economies of the three West African countries and caused global alarm that peaked in September and October when isolated cases were confirmed in countries such as the United States and Spain.

Nigeria, Senegal and Mali also recorded at least one case each before ending the epidemics in their countries.


Cuba Named 1st Country to End Mother-to-Child HIV Transmission

The World Health Organization credited Cuba with offering women early access to prenatal care, HIV and syphilis testing, and treatment for mothers who test positive

June 30, 2015


HAVANA, June 30 (Reuters) - The World Health Organization on Tuesday declared Cuba the first country in the world to eliminate the transmission of HIV and syphilis from mother to child.

The WHO said in a statement that an international delegation that it and the Pan American Health Organization sent to Cuba in March determined the country met the criteria for the designation. In 2013, only two children in Cuba were born with HIV and five with syphilis, the statement said.

"Cuba's success demonstrates that universal access and universal health coverage are feasible and indeed are the key to success, even against challenges as daunting as HIV," PAHO Director Carissa Etienne said in the statement.

Cuba's Communist government considers its free healthcare a major achievement of the 1959 revolution, although ordinary Cubans complain of a decline in standards since the fall of the Soviet Union, the country's former benefactor, in 1991.

The PAHO and WHO credited Cuba with offering women early access to prenatal care, HIV and syphilis testing, and treatment for mothers who test positive. The two organizations began an effort to end congenital transmission of HIV and syphilis in Cuba and other countries in the Americas in 2010. (Reporting by Jaime Hamre; Editing by Daniel Trotta and Lisa Von Ahn)


Step Aside, Freud: Josef Breuer Is the True Father of Modern Psychotherapy

The Viennese physician Josef Breuer (1842-1925) has a unique and prominent place in the history of psychotherapy. From 1880 to 1882, while treating a patient known as Anna O., Breuer developed the cathartic method, or talking cure, for treating nervous disorders. As a result of that treatment, he formulated many of the key concepts that laid the foundation for modern psychotherapy. This month marked the 90th anniversary of Breuer’s death, offering an opportunity to reflect on the value of his contributions.

Breuer is best known for his collaboration with Sigmund Freud and for introducing Freud to the case of Anna O. (whose real name was Bertha Pappenheim). The ideas emerging from that case so fascinated Freud that he devoted the rest of his career to developing them, in the form of psychoanalysis. The two men co-authored Studies on Hysteria, published in 1895, which is considered the founding text of psychoanalysis. However, the significance of Breuer’s contributions goes well beyond his role as Freud’s mentor and collaborator. In fact, Breuer laid the groundwork for modern talk therapy by, for example, considering all aspects of his patients’ lives and personalities and focusing on emotional expression, as opposed to the Freudian emphasis on insight and interpretation.

I discovered Breuer early in my training as a therapist, after I realized that helping my clients gain insight into their problems, as the principal focus of treatment, was rarely effective in causing fundamental change. I found Freud’s technique of free association unhelpful, because many clients who are anxious or depressed have difficulty associating freely. The most therapeutic sessions were the ones that elicited an emotional response from my clients. If I could guide them to access feelings and memories, relevant to their area of concern, they would often report a sense of something shifting inside them, which dramatically accelerated the process of growth and change.


I wanted to learn how to elicit those types of experiences consistently and began to explore techniques such as hypnosis, mindfulness and focusing, all of which involve subtle shifts in the client’s state of awareness. While studying the literature to understand the nature of these changes, I was led to Breuer’s description of the cathartic method and his work with Anna O. in Studies on Hysteria. Breuer’s ideas were strikingly relevant to modern views of therapy, and to my work with clients, and I was surprised they were not more widely known.

Breuer’s theoretical essay in Studies on Hysteria repays close reading, as many of the observations in it are remarkably prescient. The essay is more than sixty pages long and provides a comprehensive account of the nature, cause and treatment of mental illness with astonishing clarity, rigor and depth of insight. In 1955, James Strachey, the English translator of the book, described the essay as “very far from being out of date; on the contrary, it conceals thoughts and suggestions which have even now not been turned to sufficient account.” His statement is just as true today.

According to Breuer’s theory of hysteria, the illness begins when someone is exposed to psychic trauma, which he defined as any situation with a risk of serious physical or emotional injury. If the individual is unable to feel and express the emotions related to the traumatic experience, they are dissociated, that is, isolated in a separate state of consciousness that is inaccessible to ordinary awareness. Here, Breuer acknowledged and built on the pioneering work of the French psychiatrist Pierre Janet, who was the first to assert the importance of dissociation in mental illness. Breuer called this altered state of consciousness the hypnoid state, owing to its similarity to the state induced by hypnosis. Recovery and healing require accessing and expressing the dissociated emotions, through catharsis, and integrating them with the ideas in normal consciousness, a process he called associative correction.

If we compare Breuer’s theory with Freud’s formulation of psychoanalysis, there are three main differences: psychic trauma (Breuer) vs. sexual conflict (Freud) as the primary cause of psychopathology, hypnoid states (dissociation) vs. repression (defense) as the primary mechanism, and emotional expression (catharsis) vs. interpretation (analysis) as the primary means of recovery. Ironically, in each of those points, the modern view of psychotherapy has increasingly come to favor Breuer.

A large and growing body of evidence, compiled by researchers such as Bessel van der Kolk, points to the central role of trauma in the origin of psychopathology. Understanding the effects of trauma is now a major focus of medical research, driven by the urgent need to find effective treatments. Breuer’s work is also highly relevant to clinical practice. His concept of the hypnoid state, for example, is remarkably similar to, and provides a unifying link between, techniques such as mindfulness, focusing, neurofeedback and EMDR (Eye Movement Desensitization and Reprocessing) that are of importance in therapy today.

Sigmund Freud. Credit: Wikimedia Commons

The publication of Studies on Hysteria marked the end of the Breuer-Freud collaboration. Freud increasingly grew to believe that conflicts related to sexuality played an essential role in all cases of hysteria. Breuer acknowledged the importance of sexuality but considered it only one of many factors. Instead, Breuer asserted that the phenomenon of dissociation due to trauma, which was implicit in his theory of hypnoid states, was more fundamental.

In a letter to the Swiss psychiatrist Auguste Forel in 1907, Breuer wrote, “this immersion in the sexual in theory and practice is not to my taste.” He went on to write, “Freud is a man given to absolute and exclusive formulations: this is a psychical need, which in my opinion, leads to excessive generalization.” Freud, for his part, was skeptical of the whole concept of hypnoid states, later writing that “Breuer's theory of 'hypnoid states' turned out to be impeding and unnecessary, and it has been dropped by psycho-analysis today.”

Freud also promoted the idea that Breuer was too cautious and conservative to recognize the true importance of sexuality. To support this view, Freud claimed Breuer had abruptly terminated his work with Anna O., and resolved never to work with hysterical patients again, because she developed strong sexual feelings towards him. This view was asserted as fact by Freud’s biographer, Ernest Jones, and came to define the conventional view of the matter.

However, there is no reliable basis for Freud’s claim. The psychoanalyst and Freud biographer Louis Breger writes: “Freud’s version of what happened is simply not true. It is an example of the ‘resistance’ argument that he later used to dismiss everyone who raised questions about his theory of sexuality: They could not accept it because it was too personally threatening.” Freud would later use a similar argument with many of his followers who disagreed with him, including Carl Jung, Alfred Adler, Sandor Ferenczi and Otto Rank. Breger goes on to assert: “The truth is that Breuer did not flee from Bertha but remained involved with her treatment for several years.”

In The Interpretation of Dreams, Freud wrote: “An intimate friend and a hated enemy have always been necessary requirements of my emotional life. I always knew how to provide myself with both over and over…sometimes the two were united within the same person.” That statement is remarkably descriptive of Freud’s relationship with Breuer.

It is notable that Breuer had been far more than a collaborator to Freud, who was 14 years younger, lending him money, referring patients to his practice, and welcoming him into his home. The Yale historian Peter Gay, in his biography of Freud, wrote, “His disagreeable grumbling about Breuer in the 1890s is a classic case of ingratitude, the resentment of a proud debtor against his benefactor.”

Breuer never publicly challenged Freud or responded to his criticisms, choosing instead to withdraw from the field of psychology to focus on his medical practice. Freud had the field all to himself and his writings decisively shaped the public view of Breuer, which persists to this day.

Setting aside personal details, the key question is whose ideas were more valid, and in that regard history is squarely on the side of Breuer. Freud’s emphasis on sexuality as the dominant factor shaping human development and causing psychopathology is no longer taken seriously today. Instead, the role of dissociation due to trauma is increasingly recognized as more fundamental. Also, most therapists today realize the importance of helping clients access and integrate painful emotions due to past trauma, which is the essence of Breuer’s cathartic method.

When Breuer developed the cathartic method to treat Anna O., he initiated several radical changes. First, he shifted the focus of therapy from suggestion by the therapist to self-discovery by the patient. Second, he expanded the scope of therapy from a narrow focus on treating symptoms to considering all aspects of the patient’s life and personality, thereby founding psychotherapy as a distinct discipline in its own right. Finally, he was the first person to treat mental illness through the long-term exploration of unconscious conflicts, and invented the talking cure, the treatment approach central to all forms of psychotherapy. While conventional wisdom assigns Freud credit for these achievements, the fact is they were all present in Breuer's treatment of Anna O. before his collaboration with Freud began.

The key to Breuer’s greatness was that he had the intelligence and openness of mind to recognize that his patient had much to teach him, and the humility to value her experience over his authority as a physician. Ninety years after his death, Breuer’s ideas inform and enrich my work with clients every day, reminding me to learn from their perspective, respect the role of trauma and value emotional experience over insight.

Further Reading

  1. Studies on Hysteria. Josef Breuer and Sigmund Freud. Translated from the German and edited by James Strachey. Hogarth Press, 1955.
  2. The Life and Work of Josef Breuer. Albrecht Hirschmüller. New York University Press, 1978.
  3. Freud: A Life for Our Time. Peter Gay. Macmillan, 1988.
  4. A Dream of Undying Fame: How Freud Betrayed His Mentor and Invented Psychoanalysis. Louis Breger. Basic Books, 2009.
  5. The Body Keeps the Score. Bessel van der Kolk. Viking, 2014.

Superconductivity Record Bolstered by Magnetic Data

Superconducting coils can keep objects in magnetic levitation with virtually no energy input—but require very low temperatures.

The long-standing quest to find a material that can conduct electricity without resistance at room temperature may have taken a decisive step forward. Scientists in Germany have observed the common molecule hydrogen sulfide superconducting at a record-breaking 203 kelvin (–70 ˚C) when subjected to very high pressures. The result confirms preliminary findings released by the researchers late last year, and is said to be corroborated by data from several other groups.

Some physicists urge caution, however. Ivan Schuller, at the University of California, San Diego, says that the results "look promising" but are not yet watertight. However, Antonio Bianconi, director of the Rome International Center for Materials Science Superstripes (RICMASS), thinks that the evidence is compelling. He describes the findings as "the main breakthrough" in the search for a room-temperature superconductor since the 1986 discovery of superconductivity in cuprates—exotic ceramic compounds that exhibit the phenomenon at temperatures up to 164 K.

Last December, Mikhail Eremets and two other physicists at the Max Planck Institute for Chemistry in Mainz reported that they had discovered hydrogen sulfide superconducting below 190 K. When they placed a 10-micrometre-wide sample of the material in a diamond-anvil cell and subjected it to a pressure of about 1.5 million atmospheres, they found that its electrical resistance dropped abruptly when cooled below the threshold, or 'critical', temperature.

At that time, however, the researchers had not been able to demonstrate a second key characteristic of superconductivity, known as the Meissner effect, in which samples expel a magnetic field when cooled below the critical temperature.

In the latest work, the authors got together with two physicists from the University of Mainz to build a non-magnetic cell and acquire a very sensitive type of magnetometer known as a SQUID. They placed 50 micrometre-wide samples of hydrogen sulfide under pressures of up to 2 million atmospheres in an external magnetic field, and slowly warmed them, starting from a few degrees above absolute zero. They observed the tell-tale sign of the Meissner effect—a sudden increase in the sample's 'magnetization signal'—when the temperature rose past 203 K.
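The warming-scan analysis described above can be illustrated with a toy version: given magnetization readings at increasing temperatures, the critical temperature shows up as the largest jump between consecutive readings. This is only an illustrative sketch with synthetic data; the function name and the fabricated readings are mine, not the group's actual pipeline.

```python
def transition_temperature(temps, magnetization):
    # Estimate Tc as the midpoint of the largest jump between
    # consecutive magnetization readings in a warming scan.
    jumps = [abs(magnetization[i + 1] - magnetization[i])
             for i in range(len(magnetization) - 1)]
    k = jumps.index(max(jumps))
    return 0.5 * (temps[k] + temps[k + 1])

# Synthetic scan: strong diamagnetic (negative) signal below ~203 K,
# where the Meissner effect expels the field; near zero above it.
temps = list(range(150, 251, 5))                    # kelvin
signal = [-1.0 if t < 203 else -0.02 for t in temps]

print(transition_temperature(temps, signal))        # midpoint of the 200-205 K step
```

A real analysis would also vary the applied field and check for hysteresis, but the jump-finding idea is the same.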

As to why they measured a higher critical temperature than they did last year, the researchers point to possible tiny variations in the samples' crystal structure. (Under conditions of high pressures and low temperatures, hydrogen sulfide is in a solid state.)

Growing acceptance

During discussions at a recent meeting on the Italian island of Ischia, Bianconi says, it emerged that some groups in China and Japan had reproduced the results, including the drop in electrical resistance and the Meissner effect. Bianconi will not say who the groups are, explaining that they want to delay announcing their results until Eremets and colleagues have published their findings in a peer-reviewed journal (the papers are available in the arXiv online repository).

Katsuya Shimizu, a physicist at Osaka University in Japan, says that he and his colleagues have confirmed the 190 K electrical transition, using their own refrigerator to hold several samples and cells provided by Eremets.

And Schuller argues that the Mainz group should do further checks to make sure that they have not overlooked "an uncontrolled artefact," such as background noise picked up during the delicate measurements of magnetization.

Eremets and his colleagues propose that the superconductivity is likely to originate in the vibrations of the crystal lattice of H3S, which is created when hydrogen sulfide is compressed. These vibrations bind electrons together in pairs that then move through the lattice without resistance, as described by the Bardeen–Cooper–Schrieffer (BCS) theory that holds true for conventional, low-temperature superconductors.
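The lattice-vibration mechanism the researchers invoke has a well-known quantitative face. In the weak-coupling limit, BCS theory estimates the critical temperature as:

```latex
% Weak-coupling BCS estimate of the critical temperature
%   \omega_D : Debye (phonon) frequency of the lattice
%   N(0)     : electronic density of states at the Fermi level
%   V        : effective electron-phonon pairing interaction
k_B T_c \approx 1.13\,\hbar\,\omega_D\, e^{-1/(N(0)V)}
```

Because the phonon frequency scales roughly as $\omega_D \propto 1/\sqrt{M}$ for atomic mass $M$, lattices rich in light hydrogen atoms push the prefactor, and hence the estimated $T_c$, upward—one reason compressed hydrides are such attractive candidates.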

If so, they point out, other hydrogen compounds might then superconduct at even higher temperatures, and possibly even at room temperature, given that BCS theory does not place any upper limit on the superconducting transition temperature.

Some theorists, however, are not sure that BCS theory is the correct interpretation. "The question of where the high critical temperature comes from is still wide open in my opinion," says theoretical physicist Jorge Hirsch at the University of California, San Diego.


How to Reduce Heat Wave Exposure among the Most Vulnerable

Maximum temperatures on May 25, 2015, exceeded 40 degrees Celsius in many parts of India.

Most of India experienced extended extreme heat—with peaks up to 47 degrees Celsius in some areas—from mid-May through early June, resulting in a reported 2,500 deaths. Additionally, over 1,000 deaths are now being reported from a heat wave in neighboring Pakistan.

This extreme weather raises important questions about climate change and resilience: How hot was it? What factors contributed to the high death toll? How did this year compare to previous years? Who were the most vulnerable populations? And, most importantly, what lessons can be learned to help reduce the health impacts of future heat waves, in light of increasing weather extremes, poverty and other environmental pressures?

Extreme heat has severe impacts on the human body. Depending on age and humidity level, prolonged activity in such temperatures can lead to heat-related hazards like exhaustion or heat stroke. Media reports cited a temperature of 47 degrees C in the northern Indian city of Bamrauli and peaks above 45 degrees C for “days on end” in New Delhi. A heat index, which accounts for both heat and humidity, of 65 degrees C was reported in eastern India. While these temperatures were undoubtedly hot (4 to 6 degrees C above normal), it was also the duration of the heat wave that appears to stand out, lasting up to two weeks in some places.
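The heat index mentioned above folds humidity into a single apparent temperature. One common formulation is the U.S. National Weather Service's Rothfusz regression, which works in degrees Fahrenheit and is intended for hot, humid conditions (roughly 27 degrees C and above); the Celsius wrapper below is a convenience added for this sketch.

```python
def heat_index_f(temp_f, rel_humidity):
    # NWS Rothfusz regression: apparent temperature in deg F.
    # Valid roughly for temp_f >= 80 and rel_humidity >= 40.
    t, r = temp_f, rel_humidity
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

def heat_index_c(temp_c, rel_humidity):
    # Convenience wrapper: Celsius in, Celsius out.
    hi_f = heat_index_f(temp_c * 9 / 5 + 32, rel_humidity)
    return (hi_f - 32) * 5 / 9

# 43 deg C at 40% relative humidity already "feels like" well over 50 deg C.
print(round(heat_index_c(43.0, 40.0), 1))
```

At 43 degrees C and only 40 percent relative humidity the apparent temperature already lands in the mid-50s Celsius, which makes the 65-degree-C heat index reported from more humid eastern India plausible.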

Hourly data from the Punjabi Bagh station in New Delhi show consistently high peak temperatures from May 18-31 and increasingly higher nighttime temperatures, particularly from May 21-26. (Source: India Central Pollution Control Board)

Focusing on New Delhi, given the availability of monitoring data, we pulled data from India’s Central Pollution Control Board (CPCB) to confirm temperatures measured in the city, which ranged from 22 to 43 degrees C at this particular monitoring station for the period shown.

Even more enlightening in terms of the link to health impacts is an exploration of the patterns of elevated temperatures within the city center, or heat island. Previous studies have shown mixed results on whether an urban heat island exists in New Delhi—some showing a heat island effect of up to 3.5 degrees C compared to surrounding areas, and others not identifying such a feature. Satellite-based measurements of the temperature of the land surface in New Delhi during the heat wave show a pronounced (several degrees Celsius) urban heat island at night. The heat island contributed to a lack of nighttime respite from the heat in urban centers, which likely increased the death toll.

Satellite-based measurements of nighttime land surface temperatures in New Delhi show an urban heat island of up to 5 degrees Celsius in the city center, indicating less nighttime cooling with implications for health. (Source: NASA MODIS Aqua)

Air pollution in India, both indoor and outdoor, is an additional stressor on health, linked to a substantial number of premature deaths each year. The weather patterns across India during this heat wave (some areas experiencing high humidity, others experiencing dry, dusty winds) were conducive to increased air pollution, including ozone and fine particulate matter (PM2.5), also likely adding to the death toll.

Ground- and satellite-based monitoring provides a sense of the levels of PM2.5 in New Delhi during the heat wave. Data from CPCB for two monitors in New Delhi show 24-hour average PM2.5 concentrations between 50 and 235 micrograms per cubic meter during the heat wave. For reference, the World Health Organization (WHO) has established PM2.5 guideline values of 25 micrograms per cubic meter for the 24-hour mean and 10 micrograms per cubic meter for the annual mean. The CPCB measurements align with satellite measurements of light scattering that are used as a surrogate for PM2.5 concentrations, which suggest a high concentration of PM2.5 in New Delhi from May 27 to 29. Similarly, a CPCB ozone monitor in New Delhi showed elevated levels from May 27 to June 4, with an 8-hour mean as high as 267 micrograms per cubic meter on May 27, compared to the WHO guideline of 100 micrograms per cubic meter for an 8-hour mean.
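For a sense of scale, the reported peaks can be compared directly against the WHO guideline values; all numbers below are taken from the preceding paragraph.

```python
WHO_PM25_24H = 25    # micrograms per cubic meter, 24-hour mean guideline
WHO_OZONE_8H = 100   # micrograms per cubic meter, 8-hour mean guideline

peak_pm25 = 235      # reported 24-hour PM2.5 peak in New Delhi
peak_ozone = 267     # reported 8-hour ozone peak on May 27

print(peak_pm25 / WHO_PM25_24H)   # 9.4  -> PM2.5 peak was 9.4x the guideline
print(peak_ozone / WHO_OZONE_8H)  # 2.67 -> ozone peak was ~2.7x the guideline
```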

Ground-based measurements of fine particulate matter show unhealthy concentrations within many parts of New Delhi for the entire period shown, but especially on May 28. (Source: India Central Pollution Control Board)

The populations reported to have perished in the highest numbers were the very poor, the elderly, outdoor laborers and the homeless, likely people with pre-existing health problems and little access to relief. In India, extreme poverty is especially visible in the slums of urban centers, home to millions in New Delhi, with limited electricity and potable water. Informal workers lack protections and are forced to work in the hottest conditions. In climate terms, these are sensitive populations. Impacts to infrastructure (e.g., electricity shortages, buckling roads) also affect sensitive populations that rely on associated services, such as hospital care.

Satellite measurements of light extinction and scattering are used to estimate fine-particle concentrations. Here the New Delhi area, circled, shows high Aerosol Optical Depth values (unitless) for May 27-29, 2015. (Source: NASA Terra MODIS, via Giovanni Interactive Visualization and Analysis)

Higher adaptive capacity would allow the most vulnerable populations to take measures to protect themselves, although the adaptive capacity of impoverished populations can be limited. At a community level, local meteorological services predicted this heat wave and communicated warnings to the public. Efforts were made to deliver water to those most in need; electricity shortages, although not necessarily worse than normal, were also experienced by some populations. The degree to which the targeted assistance reached the most sensitive populations is unclear. Media reported that many workers were unable to avoid outdoor work as advised, and that others did not believe the severity of the forecast.

The death toll of 2,500 indicates a need to assess heat waves and the contribution of other stressors such as air pollution, and to find ways to build resilience. Heat waves in India are projected to become longer, more frequent and more intense due to climate change. The question then becomes where and how to focus additional climate adaptation resources to have the greatest positive impacts.

Exposure can be reduced through urban greening to reduce the urban heat island, implementing protections for informal workers, and controlling other stressors such as air pollution. Adaptive capacity can be boosted through more effective cooling centers with reliable electricity and cool-roof construction. A project in the western Indian city of Ahmedabad tested some of these approaches, focused on developing a heat action plan and training citizens and healthcare workers on heat stress. An example of a win-win solution would be to outfit cooling centers with rooftop solar panels, providing reliable electricity during typically sunny heat waves without increasing fossil fuel-based emissions that contribute to climate change and harm human health.

As Earth's Spin Slows, Clocks Get Another Leap Second

The history of the leap second reveals a curious pattern of decreasing frequency since its adoption 43 years ago


Due to a complex interplay between Earth’s and the moon’s gravitational fields, our planet’s rotation has gradually slowed over the millennia. A full rotation hasn’t matched the designated length of one solar day (86,400 seconds) since about 1820; it now takes slightly longer.

As a result, our global standard of time, known as Coordinated Universal Time, or UTC, occasionally becomes misaligned with UT1—the marker used to measure the actual length of one mean solar day. UT1 is determined using very long baseline interferometry (VLBI), a technique that relies on signals from extremely distant quasars to measure Earth’s precise orientation in space. In 1972 a policy to add a small unit of time, called a leap second, to UTC was implemented to correct the minute discrepancies found using such precise measurements.
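The bookkeeping behind that policy can be illustrated with a toy model: insert a leap second whenever the accumulated UT1–UTC offset would otherwise exceed the 0.9-second tolerance that the International Earth Rotation and Reference Systems Service (IERS) maintains. The constant 2-millisecond daily excess below is purely an assumption for illustration; the real excess length of day varies unpredictably, which is why leap seconds cannot be scheduled far in advance. This is a sketch, not the official IERS procedure.

```python
EXCESS_LOD_MS = 2      # assumed constant excess length of day, in ms per day
THRESHOLD_MS = 900     # IERS keeps |UT1 - UTC| below 0.9 seconds

def leap_seconds_over(days, excess_ms=EXCESS_LOD_MS):
    """Count leap seconds a simple threshold rule would insert
    over the given number of days (integer milliseconds, so the
    result is deterministic)."""
    drift_ms = 0   # accumulated UT1 - UTC offset; UTC runs fast
    leaps = 0
    for _ in range(days):
        drift_ms += excess_ms
        if drift_ms > THRESHOLD_MS:   # insert a leap second: UTC pauses 1 s
            drift_ms -= 1000
            leaps += 1
    return leaps

print(leap_seconds_over(365 * 10))  # → 7, about one every 500 days
```

With the assumed 2 ms/day excess the rule fires roughly every 500 days; with the much smaller and fluctuating excess of recent years, several years can pass between insertions, consistent with the decreasing frequency the article describes.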

For reasons that are not entirely clear to scientists, however, the rate at which Earth’s rotation slows is variable, so leap seconds must be added with unpredictable frequency. In the first few decades following the adoption of the leap second approach, UTC adjustments were made about once a year, but today’s leap second is only the fourth since 1999.


A Top Chef's Recipes for Eating Invasive Species

What's the best way to control ecological pests? Feed them to the world's greatest predator—us


Dinner is served: Asian shore crabs have spread rapidly since their introduction on the U.S. East Coast nearly three decades ago. Here they are served on a “plate” of invasive wakame seaweed. 

My restaurant, called Miya's Sushi, is just a few miles from Long Island Sound in New Haven, Conn. We have made it our goal to return our cuisine to the roots of sushi, meaning simply to use what we have available where we live. Too often what we find now are invasive species—unwanted plants and animals humans have introduced to ecosystems. Nationwide, invasive species such as the wild boar and Asian carp are destroying farms and fisheries, causing economic damage that has been estimated at $120 billion a year.

Our solution? Eat them. By collecting invasive seafood on shellfish beds, for instance, we basically provide a free weeding service. I also hope to convince the world that these invasives can be delicious, if you get into the right mind-set.


Will iPhones Change Medicine--by Turning Us All into Subjects?

New software allows researchers to finally capture the powerful health data generated by our smartphones


For a recent breast cancer study, epidemiologist Kathryn H. Schmitz of the University of Pennsylvania sent out 60,000 letters—and netted 351 women. Walking each participant through the paperwork took 30 minutes or more. Such inefficient methods of finding test subjects have been the norm for medical research.

Yet there's a wealth of data out there from the billion smartphones and 70 million wearable health trackers we buy every year. Their sensors generate terabytes of data every day about our activity, sleep and behavior. Those data would be fantastically useful to medical investigators—if only they could get at them.

For the first time, there's a way. It's free software from Apple called ResearchKit.

ResearchKit lets researchers build apps to do the recruitment and data collection for them. You, the participant, know exactly who's getting this information, and you can opt out of any part at any time. The data go directly to the research institution; Apple has no access.

These apps can incorporate both self-reported data (“How are your symptoms today?”) and information from the phone's microphone, camera, motion sensor, GPS, and so on. So instead of providing updates once every six months, you're generating data hundreds, if not thousands, of times a day.

Before ResearchKit's release in April, Apple worked with leading institutions to develop the first wave of five apps. Cardiologist Michael McConnell and a team at the Stanford University School of Medicine, for example, developed MyHeart Counts, an app for monitoring cardiac health. It tracks your activity (using the phone's motion sensors) and asks you to take a walking test every three months. The app attempts to correlate activity, fitness and risk factors over time; eventually it gives you personalized suggestions—something else traditional studies don't usually do.

Within the first 24 hours, 10,000 participants signed up for the study.

“ResearchKit solves a number of the current challenges to clinical research,” McConnell told me. With it, you can recruit more people, bring costs down and allow for better sharing of research data, he said.

Eric Schadt, a geneticist at the Icahn School of Medicine at Mount Sinai, developed an app called Asthma Health. It surveys you about your condition each day and correlates your responses with your local weather, pollution and pollen counts (via your phone's GPS). Within 72 hours, 5,000 asthma sufferers had enrolled—a number, Schadt says, that would have taken him years to amass in the old days. Other apps developed before the release include GlucoSuccess (for monitoring diabetes), mPower (for Parkinson's disease) and Share the Journey (for breast cancer). They are all free. You can participate in the latter three studies even if you don't have the disease; your data are helpful as controls.

This may all sound wonderful, but what's in it for Apple?

Your first guess might be: “To sell more iPhones, of course.” Except that here's the best part: Apple has made ResearchKit open source. It's free to anyone—even Apple's rivals, such as Google or Samsung—to use, modify or co-opt.

The ResearchKit idea seems promising. But it's worth pointing out that the reliance on a smartphone limits the participant pool to people who have one. Studies that require body scans, fluid samples or hospital-grade precision are off the table, too.

But compared with in-person and even Web-based studies, these apps can be far more present and easier to stick with, and they can generate more kinds of useful data. Studies that used to be slow, small and local can now be fast, huge and global. And that could mean better health and longer lives for us all.


The Anesthesia Dilemma

The game is a contemporary of the original Nintendo but it still appeals to today’s teens and lab monkeys alike—which is a boon for neuroscientists. It offers no lifelike graphics. Nor does it boast a screen. Primate players—whether human or not—are simply required to pull levers and replicate patterns of flashing lights. Monkeys get a banana-flavored treat as a reward for good performance whereas kids get nickels.

But the game's creators are not really in it for fun. It was created by toxicologists at the U.S. Food and Drug Administration in the 1980s to study how chronic exposure to marijuana smoke affects the brain. Players who have trouble responding quickly and correctly to the game’s commands may have problems with short-term memory, attention or other cognitive functions. The game has since been adapted to address a different question: whether anesthetics used to knock pediatric patients unconscious during surgery and diagnostic tests could affect a youngster's long-term neural development and cognition.

Despite 20 years’ worth of experiments in young rodents and monkeys, there have been few definitive answers. To date, numerous studies suggest that being put under with anesthesia early in life seems somehow related to future cognitive problems. But whether this association is causal or merely coincidence is unclear.

Researchers do know that the young human brain is exceptionally sensitive. When kids are exposed to certain harmful chemicals in their formative years, that experience can fundamentally alter the brain’s architecture by misdirecting the physical connections between neurons or causing cell deaths. But unraveling whether anesthetics may fuel such long-term damage in humans remains a challenge.

The connection does seem plausible. Anesthetics are powerful modulators of neurotransmission, or communication between neural cells, so the idea that early exposure to these chemicals may alter brain development does not seem far-fetched. Moreover, anesthesia exposure in animals has been linked to long-term learning and memory problems for almost all commonly used anesthetics.

Merle Paule, director of the division of neurotoxicology at the FDA’s National Center for Toxicological Research, has spent decades studying how a variety of chemicals affect animals. Four years ago he and his colleagues reported that putting rhesus monkeys under with ketamine (an anesthetic sometimes used for kids during short, painful procedures) is associated with lasting damage to the brain compared with control monkeys that were not exposed. When the monkeys were five or six days old, they were anesthetized with ketamine for a 24-hour period. The exposed monkeys, as a group, subsequently performed worse than controls in tests of learning and of discriminating by color and position. Some three years after that exposure the ketamine monkeys often selected fewer levers per second than the control animals. The differences were relatively subtle and perhaps would not make much of a difference in the lives of individual monkeys. Yet because the exposed subjects consistently performed slightly worse than the controls, the findings give researchers and clinicians pause. And still, seven years after their ketamine treatment, those monkeys continue to show below-normal brain function, Paule says.

What’s more, his team also showed in separate work that similarly exposed monkeys suffer more brain-neuron deaths. Most recently Paule’s team has found in preliminary work that when monkeys were put under with a mix of isoflurane and nitrous oxide—similar to what is often used in young humans—just one eight-hour period of anesthesia was linked with long-term development and learning issues in those nonhuman primates. But translating that finding to humans is not perfect: Pediatric surgery in humans rarely takes that length of time.

The new monkey work has not yet been submitted to a peer-reviewed publication, but Paule presented preliminary results during a public FDA Science Forum in May 2015 at the agency’s offices in Silver Spring, Md. Based on these kinds of findings, Paule says, researchers need to explore whether there is a harm threshold for each anesthetic regimen used in humans and determine whether anything can be done to ameliorate or prevent the adverse effects already seen in nonhuman primates.

In humans, a growing body of work already suggests there may be cause for concern. One retrospective study published in 2011 found that children who had multiple anesthesia exposures before two years of age were twice as likely as unexposed children to be diagnosed with a learning disability, even when overall health was taken into account. Children who had only a single course of anesthesia, however, did not exhibit elevated levels of such disorders.

The research findings are dueling, however. Another study, published in 2012, found that children under three years old who had even one surgical procedure requiring general anesthesia appeared more likely to have difficulties with abstract reasoning and language by age 10.

Despite conflicting results like these, groups including the FDA and the American Academy of Pediatrics decided the same year that there was enough evidence to endorse a statement declaring that “increasing evidence…[now] suggests the benefits of these agents should be considered in the context of their potential to cause harmful effects.” Still, the statement stopped short of recommending avoiding anesthesia altogether. Instead, it said that in the absence of conclusive evidence it would be unethical to withhold sedation and anesthesia when necessary.

Since then additional study findings have heightened concerns. Another report, published this month, found that children under four years of age who had been under general anesthesia for an average of 37 minutes tended to score lower as a group on listening comprehension and performance IQ tests than those who were not exposed.

Such cognitive deficits in the anesthetized kids were also associated with brain changes in the occipital cortex and cerebellum. Yet, like the ketamine monkey studies, these types of deficits may not be significant in the daily lives of children. “Maybe scoring three, four or five points worse on IQ tests may not mean much for an individual, but if you lower the IQ in all the kids that have anesthesia exposure early in life, that could put a big burden on society in general,” says lead author Andreas Loepke, a professor of clinical anesthesia and pediatrics at the University of Cincinnati College of Medicine.

But is it the anesthesia that fueled future issues or might that exposure simply be a stand-in for some other larger problem affecting these children—say sicker kids needed surgery and went on to have future cognitive issues stemming from those health problems? Perhaps. “Even if you see an association, you don’t know if it’s anesthesia," says David Warner, a professor of anesthesiology at Mayo Clinic who is overseeing new long-term analysis of children. "With surgery other things like inflammatory response could perhaps cause issues and anesthesia may just be a marker, so that’s why a study like this can never be definitive,” he says. “We can never overcome that limitation but we are trying to account for variations.”

That’s where the game comes into the picture. For the past three years his team has worked on an observational study exploring how such anesthesia use relates to future learning disabilities. The same mechanized light game played by monkeys has been getting almost daily play from kids at Mayo who are participating in the research. If the research team can identify consistent effects of these exposures in kids, say on specific aspects of learning and memory or on specific brain changes, that would be an important win for the field.

The kids are doing more than gaming. They each come in for one four-hour testing session at Mayo. One hour is spent on the video game. (Kids typically win around $5 in nickels alongside the $100 they are paid for participating in the experiment.) The other three hours are spent completing a battery of widely accepted psychological tests involving memory recall, card sorting and other measures. The lab sees, on average, one study child each workday, and Warner expects to complete the study in spring 2017. Some of the kids had multiple anesthesia exposures or were exposed to various anesthesia chemicals, so the study may also shed light on differences there, or on possible differences by sex. (Young children are usually completely anesthetized, not just sedated with lower doses of drugs, for most medical procedures, which is another reason why their exposure levels may be high.)

The study, like others that came before it, is observational rather than the ideal gold standard study where patients can be randomized to specific treatments. But the Mayo work can still help answer some as-yet unsettled questions. Warner believes his study is attractive because it will use methods to evaluate kids similar to those the FDA already used in monkeys, which will allow for direct comparison of the primate findings with human data. They both use the same test game with similar rewards—although children learn to play after watching a short instructional video whereas monkeys need to be extensively trained.

The Mayo group has been following a group of middle and high school–aged teens who had general anesthesia before three years of age and comparing them with children who did not undergo anesthesia at that age. The control group is matched by birth weight, gestational age (for example, if they were born prematurely), parental education levels and if they, too, would have been likely to receive anesthesia but never did—say they were ill but their parents elected to postpone surgery because the condition was not life-threatening.

Yet even with these results it will still be murky what to do next. Only a tiny fraction of kids, a single-digit percentage, are put under at a young age. But those numbers still mean that at least half a million children under three years old are exposed to anesthetic agents each year.

Many of these surgeries are unavoidable. They treat life-threatening illnesses, avert serious health complications or substantially improve quality of life. The most common type of birth defect—congenital heart defects affecting the structure of an infant’s heart and its function—is one such example. About one in four babies born with heart defects need surgery or other procedures during the first year of life.

And the same chemicals used for surgical anesthesia are also used to anesthetize kids during nonsurgical procedures such as MRI scans and CT scans to ensure patients do not move. “I think if the kids need these tests and they need to hold still for these tests, then we have to use the drugs,” Loepke says. “We are between a rock and a hard place there because if the kid doesn’t have these diagnostic tests and we don’t know what’s wrong with the kid, then the kid may suffer more because we didn’t figure out what was needed.” For his part, Warner says he hopes his team’s findings may prompt future research into alternative anesthesia formulas or the development of drugs to boost brain health in the aftermath of surgery.

One area researchers may explore, for example, is if behavioral therapy to give kids more stimulation following surgery may offset anesthesia’s effects—something that has shown some promise in rodents.

To truly confirm the link between anesthesia and deficits, however, a randomized study would need to be done. One such effort, headed by an Australian researcher, is already underway. It compares infants undergoing hernia repair under general anesthesia with infants getting the surgery under regional anesthesia alone. The children then undergo neurocognitive testing at age five. Results from that study are expected in the next couple of years. But for now doctors and researchers are carefully watching for results from the Mayo study. As that research team doles out bags of nickels, parents and physicians are banking on a big return.


Pink Salmon Struggle as Freshwater Becomes Acidic

Pink salmon are providing researchers with sobering hints to how carbon dioxide-induced acidity could affect freshwater fish species by the end of the 21st century.

A study published yesterday showed that early exposure to high levels of CO2 during the larval stage of development had significant negative effects on the fish’s size, metabolism and ability to sense threats in their environment.

The study was among the first to look at how different CO2 levels could affect fish larvae in fresh water, according to the lead author, Michelle Ou, a former master’s student at the University of British Columbia in Vancouver.

“We didn’t actually expect to see so many effects,” she said. “We were just poking around to see what we could find.”

Pink salmon seemed like a good species to start with. Not only are the fish abundant and economically important, but they also serve as a keystone species in marine, freshwater and terrestrial ecosystems, according to the researchers. Although pink salmon spend their adulthood in the open ocean, their first weeks of life are in freshwater streams. Once they have matured from larvae to fry, the fish leave the streams where they hatched and swim to the open ocean. Later, as adults, the fish will return to the same streams to spawn.

Ou and her colleagues at UBC created an experiment to test how fish were responding not only to ambient CO2 concentrations but also to acidity levels expected by 2100.

After obtaining salmon embryos from a hatchery, the researchers transferred them into freshwater flow-through tanks with either ambient, high or variable concentrations of CO2. After 10 weeks, they tested the baby fish to see whether or not their development had been affected by the different conditions.

They found that the fish raised in high-CO2 water were not only smaller and lighter but also had impaired senses. The pink salmon larvae were bolder around new objects and did not seem to be afraid of alarm cues in the water that would normally prompt fish to flee.

Weight loss and impaired navigation

The impaired senses matter because salmon rely on the odor of their natal stream to navigate. “Think of it as a smell fingerprint of their natal stream, and they use that to find their way home,” Ou said.

While pink salmon are less particular about where they spawn than other salmon species, the research suggests that higher levels of CO2 could eventually prevent the fish from finding their way to their natal streams if they are unable to adapt, she said.

Once the fish had reached the age at which they would normally swim to the open ocean, the researchers transferred the fish into saltwater tanks that had either the same or increased levels of CO2-induced acidification.

When the fish grew up in fresh water and seawater with high concentrations of CO2, they lost weight at double the rate of fish that were only exposed to salt water with higher CO2 levels. Their ability to take up oxygen also went down by 30 percent, according to the study.

The findings showed that the fish and freshwater ecosystems may be more vulnerable to rising levels of carbon dioxide than previously thought, though the researchers don’t really know why carbon dioxide is having this effect.

A dearth of research

Previously, the scientific community believed that the ocean was so well buffered that higher levels of atmospheric CO2 would have little impact on marine life. Now researchers are struggling to figure out how conditions have changed, since there is very little data to create a baseline comparison.

Part of the reason it took longer to recognize the impact of CO2 is because adult fish tend to be more capable of handling higher levels of acidity, said Colin Brauner, a zoology professor at UBC and co-author of the study.

“People have looked at CO2 exposure in adult fish for a long time. If you expose it to 10 times the highest concentration used in our experiment, the fish don’t have a problem. Their gills can pump out acid, so their blood stays stable. So people thought fish would be just fine,” Brauner said.

By contrast, recent studies in tropical fish have shown that fish larvae experience quite large effects from increased CO2, he said.

Very little research has focused on freshwater species, because the conditions tend to be much more variable between streams, lakes and rivers. However, more studies on how CO2 affects fish in these habitats are needed because about 40 percent of fish species live in fresh water, Ou said.

This new research suggests that the effects of CO2 on larval fish may be broader than previous research had shown, according to Brauner.

“It may be during that early development that all fish are affected in a similar way. We don’t know, but most of what we see in developing salmon are seen in developing tropical fish,” Brauner said. “If the mechanism is the same, it could have a broad effect.”

www.eenews.net


Oddball Black Hole May Have Cosmic Cousins

SS 433 is a ravenous black hole that sucks matter off its companion supergiant star like some sort of cosmic vampire, and it’s a messy eater. To date, SS 433 has been the only confirmed instance of a phenomenon known as “supercritical accretion,” in which the black hole’s gluttonous stardust scarfing results in a hail of crumbs being thrust out into space. From Earth we view SS 433 through the disk of material spiraling toward the black hole, so we do not see its powerful x-rays in all their glory. If the view were not obstructed by accreting material, SS 433 would appear as the brightest x-ray emitter in the galaxy.

Because it inhales material so ravenously, SS 433 has attained a singular status as an oddball in the Milky Way. Now, observations of exceedingly bright black hole binaries in nearby galaxies—other stars partnered with voracious black holes—suggest that these astronomical pairs may be up to the same thing. The extragalactic binaries are known as ultraluminous x-ray sources (ULXs) and this new work reveals that they are probably also superaccreting objects.

The ULX enigma

As a rule, the faster black holes eat, the brighter they shine, but there is an upper limit on their luminosity. At a certain point, the pressure of the radiation blazing off the accretion disk is strong enough to counteract the black hole’s gravitational pull, so any excess in-falling material is blasted back out into space. This turning point is called the Eddington limit: the more massive a black hole, the stronger its gravity, and the brighter it can get before reaching the limit.
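For a sense of scale, the classical Eddington limit for steady, spherical accretion of ionized hydrogen is L_Edd = 4πGMm_p·c/σ_T, which grows linearly with the black hole's mass M. A minimal sketch using textbook physical constants (the numbers are standard values, not figures from the article):

```python
import math

# Standard physical constants in SI units (textbook values).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_P = 1.673e-27      # proton mass, kg
C = 2.998e8          # speed of light, m/s
SIGMA_T = 6.652e-29  # Thomson scattering cross section, m^2

def eddington_luminosity(mass_in_suns):
    """Eddington luminosity (watts) for a given mass in solar masses,
    assuming spherical accretion of ionized hydrogen."""
    return 4 * math.pi * G * (mass_in_suns * M_SUN) * M_P * C / SIGMA_T

# A 10-solar-mass stellar black hole:
print(f"{eddington_luminosity(10):.2e} W")

# The limit scales linearly with mass, so a 1,000-solar-mass
# intermediate-mass black hole can shine 100x brighter:
print(round(eddington_luminosity(1000) / eddington_luminosity(10)))
```

For one solar mass this works out to roughly 1.3 × 10^31 watts (about 1.3 × 10^38 erg/s), which is why a sufficiently massive black hole could legitimately reach ULX brightness without any supercritical trickery.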

If ULX black holes were only as big as the black holes in the Milky Way, they should have reached their Eddington limits before getting as bright as they are. So one explanation for ULX luminosity might be that these objects contain “intermediate-mass black holes” at least a thousand times the mass of the sun—not as enormous as the supermassive black holes in the cores of galaxies, but much bigger than the stellar-mass black holes that pepper the Milky Way.

The alternate explanation for ULX brightness is that they are relatively small black holes, no more than a few hundred solar masses, that ingest material so voraciously that they have reached “supercritical accretion.” Basically, they cheated the Eddington limit and kept getting brighter. “Now, this is not as daft an idea as it might seem,” says Tim Roberts, an astrophysicist at Durham University in England, who was not involved with this study. The existence of the Eddington limit hinges on the assumption that material falls toward the black hole from all directions. But because real black holes wrap material around themselves in disks, it is possible for them to keep gobbling matter even after the outward push of radiation should have stemmed their intake. As black holes reach their Eddington limits, their disks become hot and bloated. The matter on the interior of the doughnut-shaped accretion disk is not blasted away by the radiation emanating from the disk’s surface, so it gets sucked into the black hole. Meanwhile radiation pressure blows away the outer layers of the disk in a powerful wind. Scientists see exactly this kind of outflow from SS 433.

Since the turn of the millennium astronomers have observed ULXs in hopes of determining which of the two theories describes the true nature of these strange objects, but x-ray observations alone have not settled the debate. So a group of scientists from the Special Astrophysical Observatory in Russia and Kyoto University in Japan investigated the visible light emitted by ULXs, hoping these wavelengths might provide new insight. The team used the Subaru telescope in Hawaii to obtain optical spectra of the four nearest ULXs. Their work was published on June 1.

SS 433 joins an eccentric family

The scientists determined that ULXs are similar to SS 433 by examining features of their visible light called “emission lines.” Such data give insight into the chemical composition of celestial objects and how their gas is moving. The scientists found that the relative strengths and breadths of the hydrogen and helium emission lines in ULX spectra could be explained as originating from different regions of a dense, outward-flowing wind, a telltale sign of superaccretion. SS 433 exhibits similar features because of its own disk wind. The luminosity of SS 433 is also in the same range as that of the ULXs observed in this study, which lends further support to the idea of their kinship. The scientists concluded that ULXs are most likely small black holes that have surpassed the Eddington limit, like SS 433.

Lingering uncertainties

Astrophysicist David Cseh at Radboud University Nijmegen in the Netherlands, who was also not involved with this study, agrees with Roberts. Cseh points out that astronomers have spotted at least one ULX that pulsates—that is, it periodically increases and decreases its brightness—which can only be explained by the presence of a neutron star rather than a black hole. “So, for those of us that work in ULXs, there is still plenty of mystery left,” Roberts says. The longstanding debate over the nature of ULXs might not be over yet, but it looks like one weird black hole binary might finally fit in somewhere.


Pre-Crastination: The Opposite of Procrastination


Is pre-crastination — exhibited by college students, bill payers, e-mailers, and shoppers — a symptom of our harried lives? 

Procrastination is a well-known and serious behavioral problem with both practical and financial costs. Taxpayers commonly put off submitting their annual returns until the last minute, risking mathematical errors in their frenzy to file. Lawmakers notoriously dawdle and filibuster before enacting sometimes rash and ill-advised legislation at the eleventh hour. And students burn the midnight oil to get their term papers submitted before the impending deadline, precluding proper polishing and proofreading. For these reasons, we are cautioned not to procrastinate.

However, the opposite of procrastination can also be a serious problem, a tendency we call “pre-crastination.” Pre-crastination is the inclination to complete tasks quickly just for the sake of getting things done sooner rather than later. People answer emails immediately rather than carefully contemplating their replies. People pay bills as soon as they arrive, thus failing to collect interest income. And people grab items when they first enter the grocery store, carry them to the back of the store, pick up more groceries at the back, and then return to the front of the store to pay and exit, thus toting the items farther than necessary. Familiar adages also warn of the hazards of pre-crastinating.

We first found striking evidence of pre-crastination in a laboratory exploring the economics of effort. College students were asked to carry one of a pair of buckets: one on the left side of a walkway and one on the right side of the same walkway. The students were instructed to carry whichever bucket seemed easier to take to the end of the walkway. We expected students to choose the bucket closer to the end because it would have to be carried a shorter distance. Surprisingly, they preferred the bucket closer to the starting point, actually carrying it farther. When asked why they did so, most students said something like, “I wanted to get the task done as soon as possible,” even though this choice did not in fact complete the task sooner.

Nine experiments involving more than 250 students failed to reveal what might have been so compelling about picking up the nearer bucket. Although some hidden benefit may await discovery, a simple hypothesis is that getting something done, or coming closer to getting it done, is inherently rewarding. No matter how trivial the achievement, even something as inconsequential as picking up a bucket may serve as its own reward.

Is pre-crastination — exhibited by college students, bill payers, e-mailers, and shoppers — a symptom of our harried lives? Other evidence from our laboratories suggests it is not: a second experiment was done with pigeons. The birds could earn food by pecking a touchscreen three times: first, on a square in the center of the screen; second, on the same square or on a square that randomly appeared to its left or right; and third, on a side square after a star appeared within it. Critically, food was given after the final peck regardless of whether the second peck struck the center square or the side square where the star would be presented. The pigeons directed their second peck to the side square, hence moving to the goal position as soon as they could even though there was no obvious or extra reward for doing so. Thus, the pigeons pre-crastinated.

Finding pre-crastination in the pigeon is particularly important because the evolutionary ancestors of pigeons and people went their separate ways 300 million years ago. Following a popular line of thinking in comparative psychology, the fact that both pigeons and people pre-crastinate suggests that this behavioral tendency may have emerged even earlier in phylogeny.

Why would our evolutionary kin have pre-crastinated, and why do we humans and our pigeon contemporaries do so now? It is possible, as suggested above, that pre-crastination amounts to grabbing low-hanging fruit. If grain is nearby or if a bucket is close at hand, then it may be best to get it while it’s available. Another explanation is that completing tasks immediately may relieve working memory. By doing a task right away, you don’t have to remember to do it later; it can be taxing to keep future tasks in mind. Yet we doubt this is the whole story. Lifting a bucket doesn’t tax working memory very much, and it’s not obvious why directing the second peck to the future goal location would reduce the load on the pigeons’ working memory. A simpler account is that task completion is rewarding in and of itself. All potential tasks, or their underlying neural circuits, compete for completion. Neural circuits for tasks that get completed may endure longer than neural circuits for tasks that don’t.

Another benefit of completing tasks as soon as possible is that doing so provides as much information as possible about the costs and benefits of task-related behaviors. Trial-and-error learning is the most reliable way we discover what does and doesn’t succeed in everyday life. Given the value of that feedback, it may be better to gain experience from several trials than from only a few.

Pre-crastination clearly adds to the challenge of coping with procrastination. Not only must procrastinators start sooner to begin tasks they’d rather defer, but they must also inhibit the urge to complete small, trivial tasks that bring immediate rewards just for being completed. The discovery of pre-crastination may suggest a way to counter the ills of procrastination. Break larger tasks into smaller ones. Such smaller tasks, when completed, will promote a sense of accomplishment, will bring one closer to the final goal, and, via trial-and-error learning, may support the discovery of even more adaptive or innovative ways of behaving.


Monday, June 29, 2015

Supreme Court Blocks EPA Rule on Mercury Emissions

The ruling, while a setback for the EPA, is unlikely to threaten its most significant climate change-related rule, the proposed Clean Power Plan, which would regulate carbon emissions from existing coal-fired power plants.

The Supreme Court ruled Monday that the U.S. Environmental Protection Agency overstepped its authority with a 2012 regulation limiting mercury emissions and other pollutants from coal-fired power plants because it refused to consider the costs involved in complying with the mandate.

In a 5–4 decision, the court said that the EPA must consider the cost of an environmental regulation before deciding if it is “appropriate and necessary.” It left it to the EPA to decide how costs should be considered and sent the case back to the federal appeals court to decide whether the rule should remain in effect in the meantime.

The ruling, while a setback for the EPA, is unlikely to threaten its most significant climate change-related rule, the proposed Clean Power Plan, which would regulate carbon emissions from existing coal-fired power plants. If finalized in August, the Clean Power Plan is widely expected to force many coal-fired power plants across the U.S. to shutter.

“The case is unlikely to be a significant setback in EPA’s efforts to regulate other forms of pollution from power plants,” the director of the Institute for Policy Integrity at New York University Law School said in a statement. “Nothing in this decision would in any way call into question the legal legitimacy of the Clean Power Plan.”

Monday’s decision, Michigan v. EPA, involved a 2012 regulation known as the Mercury and Air Toxics Standards, which limited mercury emissions from coal-fired power plants under the Clean Air Act. The EPA estimated the standards would cost utilities $9.6 billion annually, but it refused to consider that cost when drafting the regulation because it believed the risks to public health and the environment posed by mercury emissions were too great.

In writing the majority opinion, however, Justice Antonin Scalia said the agency’s position was deeply flawed.

“It is not rational, never mind ‘appropriate,’ to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits,” Scalia wrote, adding, “EPA must consider cost—including cost of compliance—before deciding whether regulation is appropriate and necessary.”

The court ruled that cost should be a major deciding factor at the earliest stages of writing a regulation—the point at which the EPA recognizes that pollution poses a risk to the environment and the public.

“By EPA’s logic, someone could decide whether it is ‘appropriate’ to buy a Ferrari without thinking about cost because he plans to think about cost later when deciding to upgrade the sound system,” Scalia wrote.

But the EPA concluded that the benefits of regulating mercury—totaling more than $80 billion annually—would far outweigh the cost of complying with the new standards, Justice Elena Kagan wrote in her dissent. “Those benefits include as many as 11,000 fewer premature deaths annually, along with a far greater number of avoided illnesses.”

She said the EPA took costs into account at multiple stages over the course of a decade of writing the rule.

“The Agency acted well within its authority in declining to consider costs at the opening bell of the regulatory process given that it would do so in every round thereafter,” Kagan wrote. “Indeed, EPA could not have measured costs at the process’s initial stage with any accuracy. And the regulatory path the EPA chose parallels the one it has trod in setting emissions limits, at Congress’s explicit direction, for every other source of hazardous air pollutants over two decades.”

The ruling doesn’t prevent the EPA from regulating mercury emissions, but requires the agency to factor in the cost of compliance. What happens next is up to the U.S. Court of Appeals for the D.C. Circuit, which will decide whether the mercury rule can stay in effect while the EPA considers the issue.

Despite Monday’s ruling, the EPA could find its defense of the Clean Power Plan bolstered, because the Supreme Court’s decision undermines one of the coal industry’s biggest arguments against it, said Brian Potts, an attorney specializing in energy and environmental cases.

The industry has claimed that the Clean Air Act prevents the federal government from regulating both carbon dioxide emissions and mercury from coal-fired power plants at the same time. Without the mercury rule, that argument could be undermined, Potts said.

“This opinion could have significant impacts for both the mercury rule and the Clean Power Plan,” Potts said. “I think unquestionably this is a good thing for the Clean Power Plan regardless what the D.C. Circuit does because it creates a defense for the EPA.”

This article was originally published by Climate Central.


Nearly 4 of 10 U.S. Kids Exposed to Violence

The interviewers asked about conventional crime, child maltreatment, peer and sibling abuse, sexual assault, indirect exposure to violence and witnessing violence to others, and Internet violence

June 29, 2015


By Kathryn Doyle

(Reuters Health) - Phone-based surveys show that nearly four of every 10 kids and teens in the U.S. were exposed to violence or abuse over the previous year, researchers have found.

"Children are the most victimized segment of the population," said study leader David Finkelhor of the Crimes Against Children Research Center at the University of New Hampshire in Durham.

"The full burden of this tends to be missed because many national crime indicators either do not include the experience of all children or don't look at the big picture and include all the kinds of violence to which children are exposed," Finkelhor told Reuters Health by email.

Compared to 2011, the violence rates appear to be stable, and certain kinds of violence exposure may be decreasing, he said.

While the rates are not going up, "the problem is that there is still way too much," he said.

As part of the National Survey of Children's Exposure to Violence, 4,000 children aged 17 and younger were interviewed in 2013 and 2014. If a child was between ages 10 and 17, he or she was interviewed over the phone. An adult caregiver answered questions for younger children.

The interviewers asked about conventional crime, child maltreatment, peer and sibling abuse, sexual assault, indirect exposure to violence and witnessing violence to others, and Internet violence. If the child had been exposed to any of these events over the previous year, the interviewers also asked about who committed the violence, weapons and injuries.

About 37 percent of kids had been physically assaulted over the previous year, and almost 10 percent were injured as a result, the researchers reported online June 29 in JAMA Pediatrics. Two percent of girls had been sexually assaulted or abused, including more than 4 percent of girls ages 14 to 17.

About 15 percent had experienced maltreatment by a caregiver. Almost 6 percent had witnessed violence between their parents.

These numbers are similar to what's been found in previous studies in the U.S. and elsewhere, Dr. Andreas Jud of Lucerne University of Applied Sciences and Arts in Switzerland told Reuters Health by email. Jud was not part of the new study.

Most maltreatment incidents occur within the family, according to John Fluke, a child welfare scholar-in-residence at the University of Denver in Colorado.

In the social service population and in his own study, neglect is the predominant form of maltreatment, Fluke told Reuters Health by email.

"This is really complex and what is needed is some considerable effort to use surveillance data in targeted ways to help determine what prevention and treatment approaches are most effective for specific populations," Fluke said.

"Violence and abuse in childhood are big drivers behind many of our most serious health and social problems," Finkelhor noted. "They are associated with later drug abuse, suicide, criminal behavior, mental illness and chronic disease like diabetes."

Parent education and support programs have been shown to prevent family abuse, and school-based programs can reduce bullying while dating violence prevention programs can help teens, Finkelhor said.

"The challenge is to get children and families access to these programs and make such education more comprehensive and integrated into the curriculum," he said.

SOURCE: http://bit.ly/1SXWxwh


U.S. Congress Moves to Block Human-Embryo Editing

The US House of Representatives is wading into the debate over whether human embryos should be modified to introduce heritable changes. Its fiscal year 2016 spending bill for the US Food and Drug Administration (FDA) would prohibit the agency from spending money to evaluate research or clinical applications for such products.

In an unusual twist, the bill—introduced on June 17—would also direct the FDA to create a committee that includes religious experts to review a forthcoming report from the US Institute of Medicine (IOM). The IOM's analysis, which considers the ethics of creating so-called three-parent embryos, was commissioned by the FDA.

The House legislation comes during a time of intense debate on such matters, sparked by the announcement in April that researchers in China had edited the genomes of human embryos. The US National Institutes of Health (NIH) has noted that a 1996 law prevents the federal government from funding work that destroys human embryos or creates them for research purposes.

Privately funded research on editing the human germline remains legal in the United States. But the pending House bill seeks to make it harder to test embryo editing in clinical trials. A provision in the legislation would prevent the FDA from using federal funds to evaluate or permit research that involves either viable embryos with heritable genetic modifications, or sperm or eggs that could be used to create such an embryo.

“This step seems dumb—or ill-advised,” says Hank Greely, a bioethicist at Stanford University in California. It might also be premature because the FDA has not shown any indication that it would approve such research. And such a ban would not apply to the type of research that the Chinese scientists performed, because the embryos they used were not viable.

Moreover, the provision—as it stands—could backfire. Applications to the FDA to investigate a potential drug are approved automatically unless the agency moves to block them. But Patricia Zettler, a law professor at Georgia State University in Atlanta and a former FDA attorney, says that blocking an application would require the use of public funds—which the House bill would prohibit.

Greely suspects that the Republican majority in Congress “is trying to throw a (cheap) bone to some of its supporters; regrettable (to me), but not important”. The House appropriations committee, which drafted the FDA spending bill, did not respond to requests for comment.

Although the bill has been approved by a subcommittee, it would need to win approval from the full House, the US Senate and US President Barack Obama to become law. The provisions that would affect the FDA are contained in a report that accompanies the bill and has not yet been publicly released.

Watch the watchers

The FDA has been considering the implications of modifying human embryos for some time. Last year, it commissioned an IOM report on the ethical and social implications of “three-parent embryos”. These embryos could help women to avoid passing genetic diseases on to their offspring, because faulty mitochondria in the mother's egg are replaced with healthy mitochondria from another woman.

The FDA seems to be waiting for the IOM's peer-reviewed analysis, due this winter, before it decides whether to permit clinical trials on mitochondrial replacement.

The House legislation calls for another layer of review. It would direct the FDA to establish “an independent panel of experts, including those from faith-based institutions with expertise on bioethics and faith-based medical associations” to review the IOM report once released. The panel would have 30 days to evaluate the report and provide its own recommendations to the House Appropriations Committee.

William Kearney, a spokesman for the IOM's parent organization, the US National Academy of Sciences (NAS) in Washington DC, declined to comment on the House bill. But he says that the NAS has occasionally included religious specialists on its committees when appropriate. “We always strive to balance our committees with the expertise necessary to carry out the study in a scientific manner in order to produce an evidence-based report.”

In fact, the IOM committee that is evaluating mitochondrial transfer includes a bioethicist, James Childress, who teaches religious studies at the University of Virginia in Charlottesville.

But experts who have served on committees convened by the IOM or the NAS say that the House bill's provisions are highly unusual.

“It’s hard for me to understand what Congress thinks can be added by another layer of taxpayer-supported ethics reflection,” says Jonathan Moreno, a bioethicist at the University of Pennsylvania in Philadelphia. “You don’t have to be a faith-based bioethicist to recognize that there’s some global responsibility for modifying the human germline.”

Zettler says that Congress frequently orders the agency to include certain types of experts on independent advisory committees. But Zettler is not aware of any previous situations in which lawmakers mandated the participation of religious specialists, and she says that the purpose of such a requirement is unclear.

The FDA is charged with evaluating the safety and efficacy of medical products, but it is not allowed to let ethical and social implications of research influence its decisions — except to ensure that human subjects are protected in clinical trials.

Moreno worries that if the House bill becomes law, it could set a precedent for Congress to require other agencies to second-guess the NAS. “It is a signal that the culture wars aren’t dead,” he says.


Fact or Fiction?: Chocolate is Good for Your Health

The most hyped science story of the 21st century starts with a cocoa bean


Thousands of popular headlines over the past couple of decades have touted the supposed health benefits of chocolate, particularly dark chocolate (in moderation, of course). But every single one of the major studies on which those claims are based actually failed to prove any such connection. They weren't designed to: they are observational studies, whose main purpose is to identify interesting ideas that warrant closer, more rigorous investigation without wasting too much time and energy. You can blame traffic-hungry journalists (or their editors) for the specious headlines.

Really getting to the bottom of whether or not chocolate is good for you requires what's known as a randomized, double-blind, placebo-controlled trial. This is the most scientifically rigorous type of study researchers ever conduct, and it's designed to separate honest-to-goodness real evidence from wishful thinking. As it happens, just such a randomized controlled trial got underway this spring. And no, you can't volunteer for it—unless you already participated in one of two other studies.

With 18,000 expected participants, the new study is big. It has to be, because no one wants to wait decades for definitive results. Because the participants are older and thus at higher risk of suffering heart attacks and strokes, investigators should be able to collect enough data to determine whether or not the intervention is worthwhile over the course of about four years. Female participants are being recruited from one of those two earlier studies and male participants from the other.

It's expensive. The budget is somewhere between $30 million and $60 million, which helps to explain why it's being sponsored by a trio of partners: the National Institutes of Health, Mars, Inc., and Pfizer. Investigators from the Brigham and Women's Hospital and the Fred Hutchinson Cancer Research Center are carrying out the actual study.

But test subjects will not be getting free samples of chocolate. Indeed, the study pills they'll be taking won't even taste like chocolate. That is because the researchers won't actually be testing chocolate. Instead, they will be studying the health benefits of certain plant-based substances called flavanols, which are found not only in chocolate but also in tea, fruits and vegetables. (There's also a section of the study that will evaluate the health benefits of multivitamins.)

Laboratory experiments suggest that the flavanols may help keep the insides of arteries nice and flexible—a characteristic that is known to protect the heart and brain over the course of a lifetime. But the process of fermenting, drying and roasting cocoa beans in order to turn them into chocolate destroys most of their original flavanol content.

Still, cocoa contains some unique combinations of flavanols that warrant a closer look, and the compounds in question have already undergone extensive safety testing. So Mars developed a proprietary process that preserves the flavanols, starting with how growers harvest the cocoa beans in the first place, says Hagen Schroeter, who is a nutrition researcher at Mars as well as the University of California, Davis.

All this helps to explain why the study, dubbed COSMOS (for Cocoa Supplement and Multivitamin Outcomes Study), is looking at flavanols derived from cocoa and not tea.

In addition to measuring the number of heart attacks, strokes and other cardiovascular ailments among its subjects, COSMOS investigators will look at whether the flavanol extracts help to lower blood sugar levels or improve participants' scores on memory tests. The study will be large enough to detect a difference between the control and experimental arms of as little as 10 to 15 percent, says JoAnn Manson, chief of the Division of Preventive Medicine at Brigham and Women's and one of the COSMOS study leaders. A 25 percent difference, which Manson says is "feasible" to detect, would place the flavanols' benefits for heart disease very nearly in line with those of a statin drug.
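A back-of-the-envelope power calculation shows why a trial of this size is needed to detect effects that small. The sketch below uses a standard normal approximation for comparing two event rates; the 4 percent control-arm event rate and the even 9,000-per-arm split are illustrative assumptions, not published COSMOS design figures:

```python
from math import sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p_control: float, rel_reduction: float,
                         n_per_arm: int, z_alpha: float = 1.959964) -> float:
    """Approximate power of a two-sided two-sample test of proportions
    (normal approximation) to detect a relative reduction in event rate.
    All inputs here are illustrative, not the trial's actual design."""
    p_treat = p_control * (1.0 - rel_reduction)
    p_bar = (p_control + p_treat) / 2.0
    signal = abs(p_control - p_treat) * sqrt(n_per_arm)
    threshold = z_alpha * sqrt(2.0 * p_bar * (1.0 - p_bar))
    noise = sqrt(p_control * (1.0 - p_control) + p_treat * (1.0 - p_treat))
    return norm_cdf((signal - threshold) / noise)

# Assume 4% of the control arm has a cardiovascular event during the trial
# and 9,000 participants per arm (18,000 total).
for rel in (0.10, 0.15, 0.25):
    power = two_proportion_power(0.04, rel, 9000)
    print(f"{rel:.0%} relative reduction -> power ~ {power:.2f}")
```

Under these assumed inputs, a 25 percent reduction is detected with high probability while a 10 to 15 percent reduction sits near the edge of detectability; the real answer depends on the true event rate in the cohort, which is exactly why the trial's size and duration matter.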

In any event, when the results are published several years from now, you can safely ignore any news items that say anything about chocolate's health benefits. If anything, it will be the flavanols—minus the extra sugar and fat that comes with chocolate—that will prove healthy. In the meantime feel free to eat chocolate (in moderation) because you like it—not because you hope it will make you live longer.


A Battle of the Sexes Is Waged in Genes of Humans, Bulls and More


This is evidence that the genes are involved in meiotic drive, a somewhat mysterious biological process that subverts the standard rules of heredity. In it, a particular version of a gene — or in this case, an entire chromosome — manages to increase the frequency by which it is transmitted to the next generation.

New DNA sequencing data reinforce the notion that the X and Y chromosomes, which determine biological sex in mammals, are locked in an evolutionary battle for supremacy.

David Page, a biologist who directs the Whitehead Institute in Cambridge, Massachusetts, and his colleagues explored the Y chromosomes carried by males of several species, mapping stretches of mysterious, repetitive DNA in unprecedented detail. These stretches may signal a longstanding clash of the chromosomes.

Page presented the results last week at a meeting of the Society for the Study of Reproduction in San Juan, Puerto Rico. His team’s subjects included humans and other primates, a standard laboratory mouse, and a bull named Domino.

“This idea of conflict between the chromosomes has been around for a while,” says Tony Gamble, an evolutionary biologist at the University of Minnesota in Minneapolis. But the sequencing data from the bull’s Y chromosome suggests that the phenomenon is more widespread than previously thought, he adds.

The mammalian Y chromosome has long been thought of as a sort of genomic wasteland, usually shrinking over the course of evolution and largely bereft of pertinent information. Page’s work has challenged that view by revealing that it contains remarkable patterns of repeating sequences that appear dozens to hundreds of times.

But the structure of these sequences and precise measures of how often they repeat have been difficult to determine. Standard sequencing technologies often cannot distinguish between long stretches of genetic code that differ by a single DNA ‘letter’.

Letter by letter

The team sequenced many large, continuous stretches of the Y chromosome and carefully scrutinized the areas that looked as if they overlapped. They found that repeating structures make up about 24% of the accessible DNA in the human Y chromosome, and 44% of that of the bull.

And in the Y chromosome of the mouse, which is much larger than that of a human, repeating structures make up almost 90% of accessible DNA. The intricate patterns, which often contain palindromes — sequences that read the same in forward and reverse order — carry three families of protein-coding genes. What the genes are doing — and how they got there — remains a mystery, however.
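In molecular genetics, a DNA palindrome is usually defined a bit more strictly: a stretch that equals its own reverse complement, so the two strands read the same in opposite directions. A minimal sketch of that check (the example sequences are illustrative; GAATTC is the well-known EcoRI recognition site):

```python
# Map each base to its Watson-Crick partner.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (uppercase ACGT)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_dna_palindrome(seq: str) -> bool:
    """A biological palindrome equals its own reverse complement."""
    return seq == reverse_complement(seq)

print(is_dna_palindrome("GAATTC"))   # True: revcomp of GAATTC is GAATTC
print(is_dna_palindrome("GATTACA"))  # False
```

Palindromes of this kind are inverted repeats, which helps explain why long, nearly identical stretches on the Y are so hard for standard sequencing methods to resolve.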

In mammals, the X and Y chromosomes emerged relatively recently from a regular pair of chromosomes before differentiating from one another. They share many of the structures that came from their ancestral source, but these repetitive regions seem to have come from somewhere else.

The repeated genes in the mouse Y chromosome do not resemble anything on the human Y chromosome, but they do have analogues on the mouse X chromosome. And in the mouse, human and bull, the repeated genes on Y and X are expressed in the male germ cells that eventually produce sperm.

A biological black box

How that works is unclear. Sperm carry an X or a Y chromosome; genes expressed in the testes, where the cells are produced, may influence which sperm will be more likely to successfully fertilize an egg.

Previous studies lend credence to this idea. A team led by geneticist Paul Burgoyne and collaborators at the MRC National Institute of Medical Research in Mill Hill, UK, found that mice with a partial deletion of the Y chromosome produce offspring with a female-skewed sex ratio. The researchers subsequently shifted offspring sex ratios in both directions by tinkering with the expression of these multicopy genes.

Of course, mice — in nature and in the lab — usually maintain even sex ratios. Failing to do so could harm species survival. So as these Y-promoting genes made copies of themselves, subsequent mechanisms evolved to suppress their selfish urges. Page’s results provide a way to explore that evolutionary history; the data on the bull genome suggest that the mouse X and Y may not be exceptions.

With further high-resolution sequencing data, researchers may find more support for genomic battles of the sexes and possibly uncover other surprises. “There’s this rich tapestry of what sexual chromosomes are capable of,” says Gamble.
