    Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering

    This week on Journal Club session Deepak Panday will talk about a paper "Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering".


    This paper represents another step in overcoming a drawback of K-Means, its lack of defense against noisy features, using feature weights in the criterion. The Weighted K-Means method by Huang et al. (2004, 2005, 2008) [5, 7] is extended to the corresponding Minkowski metric for measuring distances. Under the Minkowski metric, the feature weights become intuitively appealing feature rescaling factors in a conventional K-Means criterion. To see how this can be used in addressing another issue of K-Means, the choice of initial centroids, a method to initialize K-Means with anomalous clusters is adapted. The Minkowski metric based method is experimentally validated on datasets from the UCI Machine Learning Repository and generated sets of Gaussian clusters, both as they are and with additional uniform random noise features, and appears to be competitive with other K-Means based feature weighting algorithms.
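
    The criterion itself is compact enough to sketch. Below is a minimal, illustrative Minkowski Weighted K-Means, assuming one global weight per feature and plain random initialization (the paper itself uses anomalous-cluster initialization and exact per-feature Minkowski centers); the function and variable names are ours, not the authors' code:

        import numpy as np

        def mwk_means(X, k, p=3.0, n_iter=50, seed=0):
            """Toy Minkowski Weighted K-Means: minimizes, over assignments, centroids
            and weights w, the criterion sum_i sum_v (w[v] * |X[i,v] - c[v]|) ** p.
            Assumes p > 1."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            centroids = X[rng.choice(n, size=k, replace=False)].astype(float)
            w = np.full(m, 1.0 / m)              # weights act as feature rescaling factors
            for _ in range(n_iter):
                # assignment step: weighted Minkowski distance to every centroid
                dist = np.sum((w * np.abs(X[:, None, :] - centroids[None, :, :])) ** p, axis=2)
                labels = dist.argmin(axis=1)
                # centroid step: the mean is exact for p = 2 and a workable stand-in
                # otherwise (the paper computes per-feature Minkowski centers)
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(axis=0)
                # weight step: features with small within-cluster dispersion earn large
                # weights, so uniform noise features drift toward zero weight
                disp = np.zeros(m)
                for j in range(k):
                    disp += np.sum(np.abs(X[labels == j] - centroids[j]) ** p, axis=0)
                disp += 1e-12                    # guard against zero dispersion
                w = 1.0 / np.sum((disp[:, None] / disp[None, :]) ** (1.0 / (p - 1)), axis=1)
            return labels, centroids, w

    Appending a column of uniform random noise to a clustered dataset and watching its weight collapse toward zero reproduces, in miniature, the defense against noisy features the abstract describes.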


    Papers:

    Date: 2021/03/10
    Time: 14:00
    Location: online

    in UH Biocomputation group on October 03, 2021 02:01 PM.

    Fast radio bursts could help solve the mystery of the universe’s expansion

    Astronomers have been arguing about the rate of the universe’s expansion for nearly a century. A new independent method to measure that rate could help cast the deciding vote.

    For the first time, astronomers calculated the Hubble constant — the rate at which the universe is expanding — from observations of cosmic flashes called fast radio bursts, or FRBs. While the results are preliminary and the uncertainties are large, the technique could mature into a powerful tool for nailing down the elusive Hubble constant, researchers report April 12 at arXiv.org.

    Ultimately, if the uncertainties in the new method can be reduced, it could help settle the longstanding debate that holds our understanding of the universe’s physics in the balance (SN: 7/30/19).

    “I see great promises in this measurement in the future, especially with the growing number of detected repeated FRBs,” says Stanford University astronomer Simon Birrer, who was not involved with the new work.

    Astronomers typically measure the Hubble constant in two ways. One uses the cosmic microwave background, the light released shortly after the Big Bang, in the distant universe. The other uses supernovas and other stars in the nearby universe. These approaches currently disagree by a few percent. The new value from FRBs comes in at an expansion rate of about 62.3 kilometers per second for every megaparsec (about 3.3 million light-years). While lower than the values from the other methods, it’s tentatively closer to the value from the cosmic microwave background, or CMB.

    “Our data agrees a little bit more with the CMB side of things compared to the supernova side, but the error bar is really big, so you can’t really say anything,” says Steffen Hagstotz, an astronomer at Stockholm University. Nonetheless, he says, “I think fast radio bursts have the potential to be as accurate as the other methods.”

    No one knows exactly what causes FRBs, though eruptions from highly magnetic neutron stars are one possible explanation (SN: 6/4/20). During the few milliseconds when FRBs blast out radio waves, their extreme brightness makes them visible across large cosmic distances, giving astronomers a way to probe the space between galaxies (SN: 5/27/20).

    As an FRB signal travels through the dust and gas separating galaxies, it becomes dispersed in a predictable way that causes some frequencies to arrive slightly later than others. The farther away the FRB, the more dispersed the signal. Comparing this delay with distance estimates to nine known FRBs, Hagstotz and colleagues measured the Hubble constant.
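
    The arithmetic behind that comparison can be caricatured in a few lines. In the low-redshift limit, the extragalactic dispersion measure grows roughly in proportion to the Hubble constant times redshift (the Macquart relation), so a straight-line fit through (redshift, dispersion measure) pairs yields an H0 estimate. The numbers and the simple fit below are our own illustration, not the team's analysis:

        import numpy as np

        # SI constants and assumed cosmology (values here are illustrative)
        c, G, m_p = 2.998e8, 6.674e-11, 1.673e-27
        Omega_b, f_IGM = 0.049, 0.84         # baryon density; fraction of baryons in the IGM
        PC_PER_CM3 = 3.086e22                # 1 pc/cm^3 expressed in electrons per m^2
        KM_PER_MPC = 3.086e19                # converts H0 from 1/s to km/s/Mpc

        # hypothetical (redshift, extragalactic dispersion measure) pairs; a real
        # analysis first subtracts the Milky Way and host-galaxy contributions
        z = np.array([0.03, 0.10, 0.16, 0.29, 0.38])
        dm = np.array([30.0, 95.0, 160.0, 275.0, 380.0]) * PC_PER_CM3   # to SI (m^-2)

        # low-redshift Macquart relation: DM_IGM ~ [3 c Omega_b f_IGM / (8 pi G m_p)] * H0 * z,
        # so the least-squares slope through the origin carries the Hubble constant
        K = 3 * c * Omega_b * f_IGM / (8 * np.pi * G * m_p)
        H0 = np.sum(dm * z) / (K * np.sum(z ** 2))
        print(f"H0 ~ {H0 * KM_PER_MPC:.1f} km/s/Mpc")   # ~70 with these made-up numbers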

    The largest error in the new method comes from not knowing precisely how the FRB signal disperses as it exits its home galaxy before entering intergalactic space, where the gas and dust content is better understood. With a few hundred FRBs, the team estimates that it could reduce the uncertainties and match the accuracy of other methods such as supernovas.

    “It’s a first measurement, so not too surprising that the current results are not as constraining as other more matured probes,” says Birrer.

    New FRB data might be coming soon. Many new radio observatories are coming online, and larger surveys, such as ones proposed for the Square Kilometer Array, could discover tens to thousands of FRBs every night. Hagstotz expects there will be sufficient FRBs with distance estimates in the next year or two to accurately determine the Hubble constant. Such FRB data could also help astronomers understand what’s causing the bright outbursts.

    “I am very excited about the new possibilities that we will have soon,” Hagstotz says. “It’s really just beginning.”

    in Science News on April 21, 2021 04:00 PM.

    A new technique could make some plastic trash compostable at home

    A pinch of polymer-munching enzymes could make biodegradable plastic packaging and forks truly compostable.

    With moderate heat, enzyme-laced films of the plastic disintegrated in standard compost or plain tap water within days to weeks, Ting Xu and her colleagues report April 21 in Nature.

    “Biodegradability does not equal compostability,” says Xu, a polymer scientist at the University of California, Berkeley and Lawrence Berkeley National Laboratory. She often finds bits of biodegradable plastic in the compost she picks up for her parents’ garden. Most biodegradable plastics go to landfills, where the conditions aren’t right for them to break down, so they degrade no faster than normal plastics.

    Embedding polymer-chomping enzymes in biodegradable plastic should accelerate decomposition. But that process often inadvertently forms potentially harmful microplastics, which are showing up in ecosystems across the globe (SN: 11/20/20). The enzymes clump together and randomly snip plastics’ molecular chains, leading to an incomplete breakdown. “It’s worse than if you don’t degrade them in the first place,” Xu says.

    Her team added individual enzymes into two biodegradable plastics, including polylactic acid, commonly used in food packaging. They inserted the enzymes along with another ingredient, a degradable additive Xu previously developed, which ensured the enzymes didn’t clump together and didn’t fall apart. The solitary enzymes grabbed the ends of the plastics’ molecular chains and ate as though they were slurping spaghetti, severing every chain link and preventing microplastic formation.

    [Image: vials of tap water containing new plastic filament before and after degradation] Filaments of a new plastic material degrade completely (right) when submerged in tap water for several days. Adam Lau/Berkeley Engineering

    Adding enzymes usually makes plastic expensive and compromises its properties. However, Xu’s enzymes make up as little as 0.02 percent of the plastic’s weight, and her plastics are as strong and flexible as one typically used in grocery bags.

    The technology doesn’t work on all plastics because their molecular structures vary, a limitation Xu’s team is working to overcome. She’s filed a patent application for the technology, and a coauthor founded a startup to commercialize it. “We want this to be in every grocery store,” she says.

    in Science News on April 21, 2021 03:00 PM.

    Bursting Neurons Signal Input Slope

    This week on Journal Club session Volker Steuber will talk about a paper "Bursting Neurons Signal Input Slope".


    Brief bursts of high-frequency action potentials represent a common firing mode of pyramidal neurons, and there are indications that they represent a special neural code. It is therefore of interest to determine whether there are particular spatial and temporal features of neuronal inputs that trigger bursts. Recent work on pyramidal cells indicates that bursts can be initiated by a specific spatial arrangement of inputs in which there is coincident proximal and distal dendritic excitation (Larkum et al., 1999). Here we have used a computational model of an important class of bursting neurons to investigate whether there are special temporal features of inputs that trigger bursts. We find that when a model pyramidal neuron receives sinusoidally or randomly varying inputs, bursts occur preferentially on the positive slope of the input signal. We further find that the number of spikes per burst can signal the magnitude of the slope in a graded manner. We show how these computations can be understood in terms of the biophysical mechanism of burst generation. There are several examples in the literature suggesting that bursts indeed occur preferentially on positive slopes (Guido et al., 1992; Gabbiani et al., 1996). Our results suggest that this selectivity could be a simple consequence of the biophysics of burst generation. Our observations also raise the possibility that neurons use a burst duration code useful for rapid information transmission. This possibility could be further examined experimentally by looking for correlations between burst duration and stimulus variables.
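
    A qualitative version of the central claim is easy to probe with a toy model. The sketch below drives an Izhikevich-type bursting neuron (our stand-in for the paper's biophysically detailed model) with a slow sinusoid and reports what fraction of spikes land on the rising phase of the input; if bursting really favors positive slopes, that fraction should come out well above one half. Grouping spikes into bursts by inter-spike interval is omitted for brevity:

        import numpy as np

        # Izhikevich "chattering" neuron (Izhikevich 2003) as a stand-in for the
        # paper's bursting-cell model; parameters a, b, c, d give repetitive bursting
        a, b, c, d = 0.02, 0.2, -50.0, 2.0
        dt, period = 0.25, 500.0                          # ms
        t = np.arange(0.0, 3000.0, dt)
        I = 10.0 + 5.0 * np.sin(2 * np.pi * t / period)   # slowly varying sinusoidal drive

        v, u = -65.0, b * -65.0
        spike_times = []
        for i in range(t.size):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I[i])
            u += dt * a * (b * v - u)
            if v >= 30.0:                                 # spike: record time and reset
                spike_times.append(t[i])
                v, u = c, u + d

        # sign of dI/dt at each spike: positive means the spike rode a rising input
        slopes = np.cos(2 * np.pi * np.array(spike_times) / period)
        print(f"{len(spike_times)} spikes, fraction on rising input: {np.mean(slopes > 0):.2f}")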


    Papers:

    Date: 2021/04/23
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 21, 2021 10:37 AM.

    The Island of Dr Izpisua Belmonte

    Human-monkey chimeras arrive to solve the problem of organ shortage. Thank Juan Carlos Izpisua Belmonte, who is ready to cure all possible diseases and even old age. With chutzpah and Cell on his side.

    in For Better Science on April 21, 2021 10:31 AM.

    Unmet Sexual Needs Can Leave People Less Satisfied With Their Relationship — But Having A Responsive Partner Mitigates This Effect

    By Emma Young

    “For better or worse, romantic partners usually have to rely heavily on each other to fulfil their sexual needs.” So begins a new paper that attempts to plug a gap in understanding sexual ideals — and what might buffer against dissatisfaction if reality doesn’t quite match.

    Sexual incompatibilities are not only common, but are difficult to resolve even with couples therapy, note Rhonda N. Balzarini at York University and colleagues in their paper in the Journal of Personality and Social Psychology: Interpersonal Relations and Group Processes. Despite this, there’s been only limited work to understand precisely what constitutes an individual’s ideal sex life. Earlier work has generally focused on narrow aspects, such as how often a person would ideally like to have sex, or on levels of sexual desire. For this new research, the team developed a broader, 30-item Sexual Ideals Scale, which asks about specific behaviours (“My partner engages in oral sex with me as much as I want my ideal partner to”, for example) but also about the importance of feeling safe and in love, or of dirty talk, for instance. 

    The first study involved 207 members of mixed-sex couples from Canada and the US who had been in a relationship for at least four months. Both members of each couple took part. As well as the new Sexual Ideals Scale, they independently completed measures of sexual satisfaction, relationship satisfaction, commitment to their partner and also their “sexual communal strength”. Someone high in sexual communal strength is more motivated to meet their partner’s needs “even when those needs are different from their own”. Such a person is likely to be perceived as being more responsive, even if their partner’s sexual ideals aren’t being met.

    This first study showed, unsurprisingly, that when people report unmet sexual ideals in their relationship, both they and their partner reported lower sexual and relationship satisfaction. However, among those with unmet sexual ideals, people with a sexually communal partner reported higher levels of both types of satisfaction. The results for men and women were very similar.

    The team also ran daily diary-based studies, which revealed that on days when people reported having more unmet sexual ideals than typical, both they and their partner reported lower levels of sexual and relationship satisfaction and commitment. There were long-term effects, too: more unmet sexual ideals over a three-week period were associated with reductions in both types of satisfaction for both partners three months later. The results couldn’t be explained by the participants’ reports of sexual frequency or levels of sexual desire.

    This study suggests that people hold ideals about their sexual relationship — and when these ideals are not met, there are negative consequences. However, again, the data suggested that having a sexually communal partner mitigated this. In a final experimental study, the team found that participants who’d been led to believe that their sexual ideals were not being met reported lower levels of both types of satisfaction if they rated their partner as low for sexual communal strength, but not if this score was high.

    People with sexually communal partners may not feel that their sexual ideals are being entirely met, but their partner’s behaviour may make this feel less of a problem, the researchers suggest. Perhaps their partner is supportive when declining their sexual advances, or more willing to compromise, or offer other forms of affection when they’re not interested in sex.

    Interventions aimed at addressing sexual incompatibilities are scarce, the team notes. But since the new research suggests that when both members of a couple make more of an effort to be responsive, relationship difficulties caused by mismatched sexual ideals can be reduced or even overcome, it does at least point to an approach worth trying.

    The detriments of unmet sexual ideals and buffering effect of sexual communal strength.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 21, 2021 10:23 AM.

    CRL and East View Release Open Access Imperial Russian Newspapers | CRL

    "CRL and East View Information Services have opened the first release of content for Imperial Russian Newspapers (link is external), the fourth Open Access collection of titles digitized under the Global Press Archive (GPA) CRL Charter Alliance. This collection adds to the growing body of Open Access material available in the Global Press Archive by virtue of support from CRL members and other participating institutions. The Imperial Russian Newspapers(link is external) collection, with a preliminary release of 230,000 pages, spans the eighteenth through early twentieth centuries and will include core titles from Moscow and St. Petersburg as well as regional newspapers across the vast Russian Empire. Central and regional “gubernskie vedomosti” will be complemented by a selection of private newspapers emerging after the Crimean War in 1855, a number of which grew to be influential...."

    in Open Access Tracking Project: news on April 21, 2021 10:21 AM.

    The already tiny neutrino’s maximum possible mass has shrunk even further

    To understand neutrinos, it pays to be small-minded.

    The subatomic particles are so lightweight, they’re almost massless. They’re a tiny fraction of the mass of the next lightest particle, the electron. But scientists still don’t know exactly how slight the particles are. A new estimate from the KATRIN experiment, located in Karlsruhe, Germany, further shrinks the maximum possible mass neutrinos could have.

    The puny particles have masses of 0.8 electron volts or less, physicist Diana Parno reported April 19 at a virtual meeting of the American Physical Society. For comparison, electrons are more than 600,000 times as bulky, at about 511,000 electron volts. “Neutrino masses are tiny,” said Parno, of Carnegie Mellon University in Pittsburgh.

    The KATRIN experiment studies tritium, a rare form of hydrogen that decays radioactively, emitting an electron and an antimatter mirror image of the neutrino, an antineutrino. Measuring the energies of the electrons can reveal the masses of the antineutrinos that flitted away. That’s because mass and energy are two sides of the same coin; a more massive neutrino would mean less energy could go to the electron in the decay.
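
    The underlying energy budget is one line of arithmetic: the decay releases a fixed total energy (the Q-value, roughly 18.6 keV for tritium), and any rest mass the antineutrino carries is energy the electron can never receive, so the endpoint of the electron spectrum shifts down by the neutrino mass. A back-of-envelope sketch, with round numbers of our choosing:

        # back-of-envelope endpoint arithmetic for tritium beta decay (energies in eV)
        Q = 18_570.0                  # approximate energy released in the decay
        for m_nu in (0.0, 0.8, 1.1):  # massless case, KATRIN's new limit, the old limit
            print(f"m_nu = {m_nu:.1f} eV  ->  max electron energy = {Q - m_nu:.1f} eV")

    Resolving a sub-eV shift at the top of an 18,570 eV spectrum is the whole experimental challenge, which is why each new data release only nudges the limit downward.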

    A previous estimate from KATRIN, using a smaller amount of data, found that the neutrino’s mass was less than 1.1 electron volts (SN: 9/18/19). In the coming years, additional data should further squeeze the neutrino’s maximum possible bulk.

    Scientists still don’t understand why neutrinos are abnormally light (SN: 2/26/18). The origin of the particle’s mass remains mysterious: While most fundamental particles obtain their masses from interacting with what’s called the Higgs field — as revealed by the discovery of its particle manifestation, the Higgs boson, in 2012 (SN: 7/4/12) — neutrinos may get their masses in a different manner.

    in Science News on April 21, 2021 10:00 AM.

    ‘Unfair and unsubstantiated’: Journal retracts paper suggesting smoking is linked to lower COVID-19 risk

    A paper suggesting that smokers were significantly less likely than nonsmokers to contract Covid-19 has been retracted because the authors failed to disclose financial ties to … the tobacco industry. The article, which appeared as a preprint and then as an “early view” in the European Respiratory Journal last July, came from a group at …

    in Retraction watch on April 21, 2021 10:00 AM.

    Neutron stars may not be as squishy as some scientists thought

    Like a dried-up lemon from the back of the fridge, neutron stars are less squeezable than expected, physicists report.

    New measurements of the most massive known neutron star find that it has a surprisingly large diameter, suggesting that the matter within isn’t as squishy as some theories predicted, physicists with the Neutron star Interior Composition Explorer, or NICER, reported April 17 at a virtual meeting of the American Physical Society.

    When a dying star explodes, it can leave behind a memento: a remnant crammed with neutrons. These neutron stars are extraordinarily dense — like compressing Mount Everest into a teaspoon, said NICER astrophysicist Zaven Arzoumanian of NASA’s Goddard Space Flight Center in Greenbelt, Md. “We don’t know what happens to matter when it’s crushed to this extreme point.”

    The more massive the neutron star, the more extreme the conditions in its core. Jammed together at tremendous densities, particles may form unusual states of matter. For example, particles known as quarks — usually contained within protons and neutrons — may roam freely in a neutron star’s center.

    The core’s composition determines its squeezability. For example, if quarks are free agents within the most massive neutron stars, the immense pressure will compress the neutron star’s core more than if quarks remain within neutrons. Because of that compressibility, for neutron stars, more mass doesn’t necessarily translate to a larger diameter. If neutron star matter is squishy, the objects could counterintuitively shrink as they become more massive (SN: 8/12/20).

    To understand how neutron star innards respond to being put through the cosmic wringer, scientists used the X-ray telescope NICER aboard the International Space Station to estimate the diameters of rapidly spinning neutron stars called pulsars. In 2020, NICER sized up a pulsar with a mass about 1.4 times the sun’s: It was about 26 kilometers wide (SN: 1/3/20).

    Researchers have now gauged the girth of the heftiest confirmed neutron star, with about 2.1 times the mass of the sun. But the beefy neutron star’s radius is about the same as its more lightweight compatriot’s, according to two independent teams within the NICER collaboration. Combining NICER data with measurements from the European Space Agency’s XMM-Newton satellite, one team found a diameter of around 25 kilometers while the other estimated 27 kilometers, physicists reported in a news conference and in two talks at the meeting.

    Many theories predict that the more massive neutron star should have a smaller radius. “That it is not tells us that, in some sense, the matter inside neutron stars is not as squeezable as many people had predicted,” said astrophysicist Cole Miller of the University of Maryland in College Park, who presented the second result.

    “This is a bit puzzling,” said astrophysicist Sanjay Reddy of the University of Washington in Seattle, who was not involved in the research. The finding suggests that inside a neutron star, quarks are not confined within neutrons, but they still interact with one another strongly, rather than being free to roam about unencumbered, Reddy said.

    The measurements reveal another neutron star enigma. Pulsars emit beams of X-rays from two hot spots associated with the magnetic poles of the pulsar. According to the textbook picture, those beams should be emitted from opposite sides. But for both of the neutron stars measured by NICER, the hot spots were in the same hemisphere.

    “It implies that we have a somewhat complex magnetic field,” said NICER astrophysicist Anna Watts of the University of Amsterdam, who presented the first team’s result. “Your beautiful cartoon of a pulsar … is for these two stars completely wrong. And that’s brilliant.”

    [Image: two beams of light stream out from the bottom of a bright orb in the center of the picture] Beams of radiation are emitted from the magnetic poles of spinning neutron stars called pulsars. Scientists typically envision pulsars with two beams on opposite sides, like a lighthouse. But the beams of a newly measured pulsar (illustrated) come from the same hemisphere. NASA’s Goddard Space Flight Center

    in Science News on April 20, 2021 03:00 PM.

    Editor’s tips for passing journal checks

    You’ve painstakingly mapped out your research goal: to answer that unanswered question. You’ve conducted your experiments, analyzed the results and written your paper. Now it’s off to a journal. And the process begins. PLOS editors have seen it all and want to help get your paper published as quickly as possible.

    What does the journal office look for, and what are the potential pitfalls? More importantly, how can you ensure that your manuscript passes journal checks and moves on to peer review quickly? Here, PLOS staff discuss a few of the most common reasons why a manuscript is rejected during the initial technical check, and how to avoid them.

    For a bit of background, after a manuscript is submitted to a scientific journal it undergoes a series of technical and ethical checks. Submissions that pass this initial screening go on to editorial assessment and peer review. Submissions that don’t meet requirements, or don’t provide enough information, may be returned to the authors for clarification. This can extend review times and even lead to a manuscript being rejected without review. Below are 5 checks and tips on how to smoothly get past them.

    Check #1: Sense check

    Quite simply, does the manuscript make sense as a submission? Is it a scientific article? Are all the typical parts of an article (abstract, introduction, methods, results/discussion, figures, citations) present? Is the language clear and understandable?

    How to pass it: Make sure that your manuscript is complete, and that the writing is clear and unambiguous. Note that it doesn’t have to be perfect at this stage, just precise enough for fellow researchers in your field to understand and evaluate your work.

    Read PLOS’ guide to editing your work

    Check #2: Journal fit and scope

    Journals tend to specialize in particular subjects and types of studies. “The biggest reason we reject without review is scope,” explains Kendall McKenzie, Managing Editor of PLOS Neglected Tropical Diseases. “Our scope page breaks down the diseases and categories of research we’re interested in, and even specifically states the kinds of things we don’t consider.”

    How to pass it: It comes down to submitting the right manuscript to the right publication. Carefully investigate the journal’s scope before submitting to ensure that your manuscript has a good chance of publication. If your particular article is on the edge of the journal’s expressed scope, or if you’re just not sure, search the journal for similar articles; if there are no comparable publications, your study is likely out of scope.

    Read PLOS’ guide to choosing a journal

    Check #3: Acceptance criteria

    Laura Simmons, Managing Editor of PLOS Genetics, agrees: “In addition to scope, our Editors in Chief and Section Editors may reject without review if a submission is lacking in biological or mechanistic insight (i.e. if it is too descriptive), or if the research doesn’t represent a significant advance in the field.”

    How to pass it: This one is all about doing your research. Different journals have different criteria for publication. Consult the journal website and consider whether your study fulfills the requirements and mission of the journal. Does the journal publish the type of research your study describes? Will your article appeal to the readers the journal serves? If not, consider a more specialized publication that focuses specifically on the type of research you are conducting, or, alternatively, a journal with a broader, more inclusive scope.

    Check #4: Plagiarism

    Most journals run an automated check that looks for similarities between your manuscript and previously published works. If the manuscript scores above a certain threshold, members of the journal staff will take a closer look at your manuscript to ensure that any direct quotes are framed within quotation marks and properly cited. “Overall the most common issue we see is authors reusing their own methods section, introduction, or conclusion from previous or related studies,” explains PLOS ONE Publishing Editor Emma Stillings. Authors don’t always realize that “you have to cite everyone, even yourself, to avoid any delay in the peer review process.”

    How to pass it: Any direct quotes must be framed within quotation marks and properly attributed. That includes your own prior works. Try to avoid reusing text, and especially copy-pasting from your other papers. Check to make sure that any summaries or allusions are properly cited as well.
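
    Commercial similarity checkers are proprietary, but the core idea is overlap of short word sequences between a manuscript and previously published text. A toy illustration of that idea (ours, not any journal's actual tool):

        def ngrams(text, n=5):
            """Set of n-word shingles, lowercased; real tools normalize far more aggressively."""
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

        def similarity(manuscript, prior_work, n=5):
            """Jaccard overlap of shingles: the kind of score a plagiarism check thresholds."""
            a, b = ngrams(manuscript, n), ngrams(prior_work, n)
            return len(a & b) / len(a | b) if a | b else 0.0

        # reusing your own methods section nearly verbatim lights this up just as
        # much as copying someone else's text, which is why self-citation matters
        print(similarity("we incubated the cells for 24 hours at 37 degrees",
                         "we incubated the cells for 24 hours at 30 degrees", n=3))

    Even a one-word change leaves most shingles shared (the example above scores 0.6), which is why lightly reworded self-reuse can still trip the threshold.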

    Check #5: Complete and consistent ethical, funding, data, and other statements

    If the statements in the submission form are unclear, lacking detail, or otherwise incomplete, the process will pause while the journal office contacts the authors for more information. Similarly, if the statements within the manuscript are different from those in the submission system, the journal office will work with the authors to reconcile them before the manuscript can advance.

    How to pass it: Label and save the paperwork from the early part of your research process: funding information, committee approval documents, permits, permission forms, patient disclosure statements, study designs, and any other materials. You may need them to complete your submission form. When you are ready to submit, proofread carefully to ensure that everything in your manuscript is up-to-date and clear. Double check to make sure that any placeholder text has been replaced with the final version.

    Read PLOS’ guide to scientific ethics & preparing data

    Final words of wisdom

    “It’s so important to familiarize yourself with a journal before submitting. What’s the scope of the journal? What article types do they publish? Are you adhering to the guidelines for that particular article type? Making sure you’re informed about what type of work the journal publishes, and how, can go a long way in deciding where to submit and speeding your manuscript through the initial submission stages,” says Eileen Clancy, Managing Editor of PLOS Pathogens.

    in The Official PLOS Blog on April 20, 2021 02:53 PM.

    Videocalling needed more than a pandemic to finally take off. Will it last?

    Eileen Donovan, an 89-year-old mother of seven living in a Boston suburb, loved watching her daughter teach class on Zoom during the coronavirus pandemic. She never imagined Zoom would be how her family eventually attended her funeral.

    Donovan died of Parkinson’s disease on June 30, 2020, leaving behind her children, 10 grandchildren and six great-grandchildren. She always wanted a raucous Irish wake. But only five of her children plus some local family could be there in person, and no extended family or friends, due to coronavirus concerns. This was not the way they had expected to mourn.

    For online attendees, the ceremony didn’t end with hugs or handshakes. It ended with a click on a red “leave meeting” button, appropriately named for business meetings, but not much else.

    It’s the same button that Eileen Donovan-Kranz, Donovan’s daughter, clicks when she finishes an English lecture for her class of undergraduate students at Boston College. And it’s the same way she and I ended our conversation on an unseasonably warm November day: Donovan-Kranz sitting in front of a window in her dining room in Ayer, Mass., and me in my bedroom in Manhattan.

    “I’m not going to hold the phone during my mother’s burial,” she remembers thinking. Just a little over a year ago, it would have seemed absurd to have to ask someone to hold up a smartphone so that others could “attend” such a personal event. Donovan-Kranz asked her daughter’s fiancé to do it.

    The COVID-19 pandemic has profoundly changed the way people interact with each other and with technology. Screens were for reminiscing over cherished memories, like watching VHS tapes or, more recently, YouTube videos of weddings and birthdays that have already happened. But now, we’re not just watching memories. We’re creating them on screens in real time.

    As social distancing measures forced everyone to stay indoors and interact online, multibillion-dollar industries have had to rapidly adjust to create experiences in a 2-D world. And although this concept of living our lives online — from mundane work calls to memorable weddings or concerts — seems novel, both scientists and science fiction writers have seen this reality coming for decades.

    In David Foster Wallace’s 1996 novel Infinite Jest, videotelephony enjoys a brief but frenzied popularity in a future America. Entire industries emerge to address people’s self-consciousness on camera. But eventually, the industry collapses when people realize they prefer the familiar voice-only telephone.

    Despite multiple efforts by inventors and entrepreneurs to convince us that videoconferencing had arrived, that reality didn’t play out. Time after time, people rejected it for the humble telephone or for other innovations like texting. But in 2020, live video meetings finally found their moment.

    It took more than just a pandemic to get us here, some researchers say. Technological advances over the decades together with the ubiquity of the technology got everyone on board. But it wasn’t easy.

    Initial attempts

    On June 30, 1970 — exactly half a century before Donovan’s death — AT&T launched what it called the nation’s first commercial videoconferencing service in Pittsburgh with a call from Peter Flaherty, the city’s mayor, to John Harper, chairman and CEO of Alcoa Corporation, one of the world’s largest producers of aluminum. Alcoa had already been using the Alcoa Picturephone Remote Information System for retrieving information from a database using buttons on a telephone. The data would be presented on the videophone display. This was before desktop computers were ubiquitous.

    This was not AT&T’s first videophone, however. In 1927, U.S. Secretary of Commerce Herbert Hoover had demonstrated a prototype developed by the company. But by 1972, AT&T had a mere 32 units in service in Pittsburgh. The only other city offering commercial service, Chicago, hit its peak sales in 1973, with 453 units. AT&T discontinued the service in the late 1970s, concluding that the videophone was “a concept looking for a market.”

    [Image: group photo of people at Bell Telephone Laboratories in 1927] AT&T President Walter Sherman Gifford (third from right) makes a videocall at Bell Telephone Laboratories in New York City on April 7, 1927. The call went to U.S. Secretary of Commerce Herbert Hoover in Washington, D.C., via 300 miles of long-distance wire. Federal Communications Commission/PhotoQuest/Getty Images

    About a decade after AT&T’s first attempt at commercialization, a band called the Buggles released the single “Video Killed the Radio Star,” the first music video to air on MTV. The song reminded people of the technological change that occurred in the 1950s and ’60s, when U.S. households transitioned away from radio as televisions became more accessible to the masses.

    The way television achieved market dominance kept videophone developers bullish about their technology’s future. In 1993, optimistic AT&T researchers predicted “the 1990s will be the video communication decade.” Video would change from something we passively consumed to something we interacted with in real time. That was the hope.

    When AT&T launched its VideoPhone 2500 in 1992, prices started at a hefty $1,500 (about $2,800 in today’s dollars) — later dropping to $1,000. The phone transmitted compressed color video at a slow frame rate of 10 frames per second (Zoom calls today run at 30 frames per second), so images were choppy.

    Though the company tried to enchant potential customers with visions of the future, people weren’t buying it. Fewer than 20,000 units sold in the five months after the launch. Rejection again.

    Building capacity

    Last June, to commemorate the 50th anniversary of AT&T’s first videophone launch, William Peduto, Pittsburgh’s mayor, and Michael G. Morris, Alcoa’s chairman at the time, spoke over videophone, just as their predecessors had done.

    Several scholars, including Andrew Meade McGee, a historian of technology and society at Carnegie Mellon University in Pittsburgh, joined for an online panel to discuss the rocky history of the videophone and its 2020 success. McGee told me a few months later that two things are crucial for a product’s actual adoption: “capacity and circumstance.” Capacity is all about the technology that makes a product easy to use and affordable. For videophones, it’s taken a while to get there.

    When the Picturephone, which was launched by AT&T and Bell Telephone Laboratories, premiered at the 1964 World’s Fair in New York City, a three-minute call cost $16 to $27 (that’s about $135 to $230 in 2021). It was available only in booths in New York City, Chicago and Washington, D.C. (SN: 8/1/64, p. 73). Using the product required planning, effort and money — for low reward. The connection required multiple phone lines and the picture appeared on a black-and-white screen about the size of today’s iPhone screens.

    [Image: man and woman watching Lady Bird Johnson on a prototype AT&T videophone] Lady Bird Johnson, who was then first lady of the United States, is visible on the screen of a prototype AT&T videophone in 1964. Everett Collection Historical/Alamy Stock Photo

    These challenges made the Picturephone a tough sell. In a 2004 paper in Technological Forecasting and Social Change, marketing researchers Steve Schnaars and Cliff Wymbs of Baruch College at the City University of New York theorized about why videophones hadn’t taken off in the decades before. Along with capacity and circumstance, they argued, critical mass is key.

    For a technology to become popular, the researchers wrote, everybody needs the money and motivation to adopt it. And potential users need to know that others also have the device — that’s the critical mass. But when everyone uses this logic, no one ends up buying the new product. Social networking platforms and dating apps face the same hurdle when they launch, which is why the apps create incentive programs to hook those all-important initial users.

    Internet access

    Even in the early 2000s, when Skype made a splash with its Voice over Internet Protocol, or VoIP, enabling internet-based calls that left landlines free, people weren’t as connected to the internet as they are today. In 2000, only 3 percent of U.S. adults had high-speed internet, and 34 percent had a dial-up connection, according to the Pew Research Center.

    By 2019, the story had changed: Seventy-three percent of all U.S. adults had high-speed internet at home, with 63 percent coverage in rural areas. Globally, the number of internet users also increased, from about 397 million in 2000 to about 2 billion in 2010 and 3.9 billion in 2019.

    But even after capacity was established, we weren’t glued to our videophones as we are today, or as inventors predicted years ago. Although Skype claimed to have 300 million users in 2019, it was a service that people typically used only on occasion, for international calls or other events that took advance planning.

    One long-time barrier that the Baruch College researchers cite from an informal survey is the aversion to always being “on.” Some people would have paid extra to not be on camera in their home, the same way people would pay extra to have their phone numbers left out of telephone books.

    “Once people experienced [the 1970s] videophone, there was this realization that maybe you don’t always want to be on a physical call with someone else,” McGee says. Videocalling developers had predicted these challenges early on. In 1969, Julius Molnar, vice president at Bell Telephone Labs, wrote that people would be “much concerned with how they will appear on the screen of the called party.”

    A scene from the 1960s cartoon The Jetsons illustrates this concern: George Jetson answers a videophone call. When he tells his wife Jane that her friend Gloria is on the phone, Jane responds, “Gloria! Oh dear, I can’t let her see me looking like this.” Jane grabs her “morning mask” — for the perfect hair and face — before taking the call.

    That aversion to face time is one of the factors that kept people away from videocalling.

    It took the pandemic, a change in circumstance, to force our hand. “What’s remarkable,” McGee says, “is the way in which large sectors of U.S. society have all of a sudden been thrust into being able to use videocalls on a daily basis.”

    Circumstance shift

    Starting in March 2020, mandatory stay-at-home orders around the world forced us to carry on an abridged form of our pre-pandemic lives, but from a distance. And one company beat the competition and rose to the top within a matter of months.

    Soon after lockdown, Zoom became a verb. It was the go-to choice for all types of events. The perfect storm of capacity and circumstance led to the critical mass needed to create the Zoom boom.

    Before Zoom, a handful of companies had been trying to fill the space that AT&T’s videophone could not. Skype became the sixth most downloaded mobile app of the decade from 2010 to 2019. FaceTime, WhatsApp, Instagram, Facebook Messenger and Google’s videochatting applications were and still are among the most popular platforms for videocalls.

    Then 2020 happened.

    Zoom beat its well-established competitors to quickly become a household name globally. It gained critical mass over other platforms by being easy to use.

     “The fact that it’s been modeled around this virtual room that you come into and out of really simplifies the connection process,” says Carman Neustaedter of the School of Communication, Art and Technology at Simon Fraser University in Burnaby, Canada, where his team has researched being present on videocalls for work, home and health.

    Zoom reflects our actions in real life — where we all walk into a room and everyone is just there. Casual users don’t need to have an account or connect ahead of time with those they want to talk to.

    Beyond design, there were likely some market factors at play as well. Zoom connected early with universities, claiming by 2016 to be at 88 percent of “the top U.S. universities.” And just as K–12 schools worldwide started closing last March, Zoom offered free unlimited meeting minutes.

    In December 2019, Zoom statistics put its maximum number of daily meeting participants (both paid and free) at about 10 million. In March 2020, that number had risen to 200 million, and the following month it was up to 300 million. The way Zoom counts those users is a point of contention.

    But these numbers still provide some insight: If the product wasn’t easy and helpful, we wouldn’t have kept using it. That’s not to say that Zoom is the perfect platform, Neustaedter says. It has some obvious shortcomings.

    “It’s almost too rigid,” he says.

    It doesn’t allow for natural conversation; participants have to take turns talking, toggling the mute button to let others take a turn. Even with the ability to send private and direct messages to anyone in the room, the natural way we form groups and make small talk in real life is lost with Zoom.

    It’s also not the best for parties — it’s awkward to attend a birthday party online when only one out of 30 friends can talk at a time. That’s why some people have been enticed to switch to other videocalling platforms to host larger online events, like graduations.

    For example, Remo, founded in 2018, uses visual virtual rooms. Everyone gets an avatar and can choose a table after seeing who else is there, to talk in smaller groups. Instead of Zoom breakout sessions, where you’re assigned a room and can’t enter another one on your own, a platform like Remo lets you see all the rooms, pick one, leave it and move to another, all without the help of a host.

    The rigidity also results in Zoom fatigue, that feeling of burnout associated with overusing virtual platforms to communicate. Videocalling doesn’t allow us to use direct eye contact or easily pick up nonverbal cues from body language — things we do during in-person conversations.

    The psychological rewards of videocalling — the chance to be social — don’t always outweigh the costs.

    Jeremy Bailenson, director of the Virtual Human Interaction Lab at Stanford University, laid out four features that lead to Zoom fatigue in the Feb. 23 Technology, Mind and Behavior. Along with cognitive load and reduced mobility, he blames the long stretches of closeup eye gazing and the “all-day mirror.” When you constantly see yourself on camera interacting with others, self-consciousness and exhaustion set in.

    Bailenson has since changed his relationship with Zoom: He now hides the box that lets him view himself, and he shrinks the size of the Zoom screen to make gazing faces less imposing. Bailenson expects minor changes to the platform will help reduce the psychological heaviness we feel.

    Other challenges with Zoom have revolved around security. In April 2020, the term “Zoombombing” arose as teleconferencing calls on the platform were hijacked by uninvited people. Companies that could afford to switch quickly moved away from Zoom and paid for services elsewhere. For everyone else who stayed on the platform, Zoom added close to 100 new privacy, safety and security features by July 2020. These changes included the addition of end-to-end encryption for all users and meeting passcodes.

    Anybody’s guess

    In Metropolis, the 1927 sci-fi silent film, a master of an industrial city in the dystopian future uses four separate dials on a videophone to put a call through. Thankfully, placing a videocall is much easier than it was predicted to be. But how much will we use this far-from-perfect technology once the pandemic is over?

    In the book Productivity and the Pandemic, released in January, behavioral economist Stuart Mills discusses why consumers might keep using videocalling. This pandemic may establish habits and preferences that will not disappear once the crisis is over, Mills, of the London School of Economics, and coauthors write. When people are forced to experiment with new behaviors, as we did with the videophone during this pandemic, the result can be permanent behavioral changes. Collaboration through videocalling may remain popular even after shutdowns lift now that we know how it works.

    Events that require real-life interactions, such as funerals and some conferences, may not change much from what we were used to pre-pandemic.

    For other industries, videocalling may change certain processes. For example, Reverend Annie Lawrence of New York City predicts permanent changes for parts of the wedding industry. People like the ease of getting a marriage license online, and she’s been surprisingly in demand doing video weddings since the pandemic started. Before, getting booked for officiating a wedding would require notice months in advance. “Now, I’ve been getting calls on Friday to ask if I can officiate a wedding on Saturday,” she says.

    Other sectors of society may realize that videocalling isn’t for them, and will leave just a few processes to be done online. Jamie Dimon, CEO of JPMorgan Chase, for example, stated in a March 1 interview with Bloomberg Markets and Finance that he thinks a large portion of his staff will permanently work in the office when that becomes possible again. Culture is hard to build on Zoom, relationships are hard to strengthen and spontaneous collaboration is difficult, he said. And there’s research that backs this.

    But none of these changes or reversions to our previous normal are a sure bet. We may find, just as in Wallace’s satirical storyline, that videocalls are too much stress, and the world will revert to phone calls and face-to-face time. We may realize that even when the technology gets better, the lifting of shutdowns and the return to in-person life may mean fewer people are available for videocalls.

    It’s hard to say which scenario is the most likely to play out in the long run. We’ve been terribly wrong about these things before.

    in Science News on April 20, 2021 01:00 PM.

    Describing Groups To Children Using Generic Language Can Accidentally Teach Them Social Stereotypes

    By Matthew Warren

    When we talk to children about the characteristics of boys and girls, our word choice and syntax can profoundly shape what they take away from the conversation. Even attempts to dispel stereotypes can backfire: as we recently reported, telling kids that girls are “as good as” boys at maths can actually leave them believing that boys are naturally better at the subject and that girls have to work harder.

    Other work has shown that “generic” language can also perpetuate stereotypes: saying that boys “like to play football”, for instance, can make children believe that all boys like to play football, or that liking football is a fundamental part of being a boy.

    Now a study in Psychological Science shows that when kids hear this kind of generic language, they don’t just make assumptions about the group that is mentioned — they also make inferences about unmentioned groups. That is, if children hear that boys like to play football, they might deduce that girls do not.

    To examine these kinds of inferences, Kelsey Moty and Marjorie Rhodes from New York University first asked 287 kids aged 4 to 7 to watch a video about a town that is home to two groups of people: “zarpies” and “gorps”. First, a narrator introduced these groups, outlining a few of their characteristics (zarpies, for instance, “like to climb tall fences”, while gorps “like to draw stars on their knees”).

    The kids then saw more zarpies and gorps, while hearing either generic or specific statements about them. A generic statement, for instance, would be “zarpies are good at baking pizza”, while a specific statement would be “this zarpie is good at baking pizza”. 

    Finally, the participants saw another zarpie and another gorp, and were asked whether each of these were good at baking pizza (or whatever activity the statement had been about).

    Consistent with past work, kids who had heard the generic statement were more likely than those who had heard the specific statement to infer that the new zarpie was good at baking pizza. But these participants were also more likely to infer that the new gorp was not good at baking pizza. That is, generic language seemed to lead the children to make assumptions even about members of the unmentioned group.

    Interestingly, these inferences were made by kids as young as 4-and-a-half. And the older the children got, the more likely they were to make them: almost all 7-year-olds who had heard the generic statement said that the new zarpie was good at baking pizza and that the new gorp was not (a group of adults also made near-unanimous judgements along these lines).

    Could it simply be that the children had been shown two apparently contrasting groups, so just assume that a gorp must be the opposite of a zarpie in all ways? A later study suggests that this isn’t the case. In this experiment, the video was presented either by a knowledgeable narrator who lived in the neighbourhood, or an unknowledgeable one who was visiting for the first time.  The children again made inferences about the unmentioned group based on generic statements — but only when the speaker was knowledgeable. This suggests that children actually reason about what a speaker knows and what information they intend to convey, in order to make their inferences.

    Overall, then, the work suggests that children (like adults) do make inferences about unmentioned categories when they hear generic statements — particularly if they think that the speaker knows what they’re talking about. So it’s easy to see how generic language could inadvertently perpetuate gender stereotypes.

    Of course, the researchers didn’t explicitly test the kids’ beliefs about boys and girls: they used the fictional groups of zarpies and gorps instead, precisely so the kids wouldn’t be influenced by existing stereotypes. But it would still be interesting to know whether children make similar inferences about boys and girls — perhaps researchers could try and minimise the influence of existing stereotypes by using fictional activities instead (for instance, if kids were told that girls were good at “plarping”, what would they think about a boy’s plarping skills?). Still, the study provides yet another striking example of how the way we speak to children shapes their beliefs about social groups.

    The Unintended Consequences of the Things We Say: What Generic Statements Communicate to Children About Unmentioned Categories

    Matthew Warren (@MattBWarren) is Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on April 20, 2021 12:31 PM.

    50 years ago, scientists claimed marijuana threatened teens’ mental health

    [Image: cover of the April 24, 1971 issue]

    “The continuing battle over pot,” Science News, April 24, 1971

    The White House Conference on Youth voted to legalize the sale of grass (with restrictions). On the same day, the Journal of the American Medical Association published an article condemning the use of marijuana by the young.… The researchers conclude that marijuana smoking is particularly harmful to the adolescent. It adds unnecessary anxieties to the already disturbing problems of physical and psychological maturation.

    Update

    Fifty years after the recommendation to legalize the recreational use of marijuana, at least 15 U.S. states have done so. In that time, a growing body of research has strengthened the link between teen marijuana use and mental health effects, including an increased risk of depression later in life. Such health concerns partly explain why people younger than 21 are prohibited from recreationally using pot. But pot use is prevalent among U.S. middle and high school students: About 25 percent of students in grades 8, 10 and 12 disclosed using the drug in 2020, scientists report.

    in Science News on April 20, 2021 11:00 AM.

    “[N]o intention to make any scientific fraud” as researchers lose four papers

    Researchers in India have lost four papers in journals belonging to the Royal Society of Chemistry over concerns that the images in the articles appear to have been doctored. The senior author on the articles is Pralay Maiti, of the School of Material Science & Technology at Banaras Hindu University, in Varanasi. “Polycaprolactone composites with …

    in Retraction watch on April 20, 2021 10:00 AM.

    Here’s what we know about B.1.1.7, the U.S.’s dominant coronavirus strain

    In December 2020, health officials in the United Kingdom announced that a new coronavirus variant was rapidly spreading across the region. Weeks later, U.S. officials found the first case in the United States (SN: 12/22/20). And by early April, the variant had become the most common form of the coronavirus identified across the country, an event that the U.S. Centers for Disease Control and Prevention had warned in January might happen (SN: 1/15/21).

    The news came amid a surge in coronavirus cases in many states, including Michigan, where the new variant, dubbed B.1.1.7, makes up nearly 58 percent of genetically screened samples collected as of March 27. The variant is less prevalent in California, New York and other states, where homegrown versions of concerning coronavirus variants are currently causing the majority of cases instead. 

    Following the emergence of the variant in the United Kingdom, scientists have worked to get a handle on how mutations in the virus’s genetic blueprint might change its behavior, amid concerns that the virus might have gained the ability to evade vaccines or cause more severe disease. Here’s what researchers have learned so far about B.1.1.7.

    B.1.1.7 is 40 to 70 percent more transmissible than other variants.

    Rapid spread of the coronavirus in a corner of London, linked to the emergence of the variant, was what first raised health officials’ concerns. In the months since, multiple studies have supported that initial finding: B.1.1.7 is more contagious than previous versions of the virus, on the order of 40 to 70 percent.

    The current hypothesis for why the variant is more transmissible is that a mutation in the spike protein — which helps the coronavirus break into cells — allows the virus to attach more tightly to the cellular protein that lets it enter a new cell. That leads to a higher amount of virus in the body and a more transmissible virus, says Eleni Nastouli, a clinical virologist at University College London.

    Another possibility is that B.1.1.7 hangs out in the body for longer than other variants, giving people more time to transmit it. Or it could cause certain symptoms, such as a cough, more frequently, which might help the virus spread. A study conducted through the U.K. Office for National Statistics, for example, previously found that those infected with the variant were slightly more likely to have cough, sore throat, fatigue or muscle pain. But a more recent study counters those results. Among 36,920 people living in the United Kingdom who used an app to report COVID-19 symptoms, there were no differences in symptoms linked to B.1.1.7 compared with ones caused by other versions of the coronavirus, researchers report April 12 in the Lancet Public Health.

    “We need a bit more work to find out what’s really going on,” says Mark Graham, a medical imaging expert at King’s College London who led the new work. But overall, there are no dramatic changes, he explains. “It’s not like suddenly you don’t get loss of smell with B.1.1.7 or anything like that. All the key symptoms are there.”  

    Overall, B.1.1.7 is probably more lethal, too.

    One worrying trait that has emerged is that B.1.1.7 seems to be more lethal than other versions of the coronavirus. Infection with the variant raises the risk of death overall by around 60 percent, studies suggest.

    Zeroing in on hospitalized patients, a group at high risk of death, however, reveals no link between infections with B.1.1.7 and risk of severe disease or death, Nastouli and her colleagues report April 12 in the Lancet Infectious Diseases. “That is obviously a positive message,” Nastouli says. Still, “it doesn’t mean that it is a less deadly virus.”

    That’s because B.1.1.7 spreads more easily than other variants, meaning it can infect more people, some of whom will die. More are likely to end up in the hospital, compared with previous variants, “but the moment that you come to the hospital, you having the variant doesn’t make a difference in terms of your outcome of severity and death,” Nastouli says. The researchers didn’t see an increased risk of death even after adjusting for factors such as age, underlying conditions or ethnicity.

    That finding still fits with the current evidence hinting that the virus is more deadly overall, says Nicholas Davies, an evolutionary biologist and epidemiologist at the London School of Hygiene and Tropical Medicine who was not involved in the work.

    Still, studies done outside of the United Kingdom are needed to confirm the results, Davies and Nastouli say.

    Vaccines — and prior infection — still appear to protect people from B.1.1.7.

    Although some mutations seen in B.1.1.7 raised concerns that the variant could dodge parts of the immune response, evidence is building that vaccines and previous infections are still protective.

    The April 12 Lancet Public Health study, for instance, did not find evidence that B.1.1.7 caused a surge in reinfections in the United Kingdom as the variant rose to dominance in December 2020, Graham says. “That might suggest that B.1.1.7 is not able to evade immunity that people have acquired from infections to previous strains.”  

    Recent data from Israel — a country that has vaccinated more than half its population — also suggest that B.1.1.7 is not infecting people who are fully vaccinated with Pfizer’s jab, instead primarily infecting those who are unvaccinated or partially immunized, researchers report in a preliminary study posted April 9 at medRxiv.org.

    So, for now, “we don’t really have to worry about alternative vaccines” for B.1.1.7, Graham says. Current vaccines “do work against it.”  

    Researchers have their eyes on other variants.

    While B.1.1.7 poses threats because of its rapid spread and increased risk of hospitalization, other variants are also worrisome. That’s in part because while vaccines still appear to work for B.1.1.7, the shots might be less effective against other variants of the virus.

    Studies done in lab dishes hint that a variant called B.1.351, first identified in South Africa, can evade some antibodies from vaccinated people or those who had been infected with other variants (SN: 1/27/21). But the immune response is multifaceted, so researchers need data from the real world to pinpoint the effect on vaccines.

    The study in Israel found that people fully vaccinated with Pfizer’s shot were more likely to be infected with B.1.351 compared with other variants. But there were few cases of B.1.351 overall, so the actual odds of infection remain unknown. In a small trial of 800 participants in South Africa, however, where B.1.351 is prevalent, nine out of nine COVID-19 cases were in people who did not receive Pfizer’s shot, the pharmaceutical company announced April 1. That hints that the shot is likely still effective against B.1.351, which caused six of those cases.

    Another variant dubbed P.1, found in Brazil, also appears to be more transmissible than earlier strains (SN: 4/14/21). People who have already recovered from COVID-19 have only around 54 to 79 percent as much protection against P.1 as they do against other variants circulating in the country. It is still unclear how well currently authorized vaccines might work against P.1.

    That’s particularly worrisome as COVID-19 cases are climbing in countries such as Brazil and India. A variant with two key mutations thought to increase transmission and allow the virus to evade the immune response, for instance, was recently identified in India and has since spread to other countries. Amid such surges, additional new variants could emerge, which may put the end of the pandemic further out of reach.  

    in Science News on April 19, 2021 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Opinions Based On Feelings Are Surprisingly Stable

    By Emily Reynolds

    Emotional states can be fleeting and somewhat inexplicable — you can feel great one minute and down in the dumps the next, sometimes for no apparent reason. It follows, then, that opinions based on emotion are likely to be equally fleeting: if you’re in a bad mood when you take part in a survey or review a product, then surely the attitudes measured and recorded will be just as transient too.

    But according to a series of studies by Matthew D. Rocklage from the University of Massachusetts Boston and Andrew Luttrell from Ball State University, this isn’t actually the case. Instead, they report in Psychological Science, attitudes based in emotion are actually more stable: the more emotional an opinion, the less it changes over time.

    In the first study, participants were asked to think of three gifts they had recently received, before selecting their attitude towards the gift from a list of adjectives, some overtly negative, some overtly positive and some neutral, such as “amazing”, “boring”, “terrifying”, or “valuable”. Participants listed gifts ranging from electric toothbrushes to Star Wars figurines.

    One month later, participants were asked to think of the same gifts and again select adjectives that represented their feelings about them. After the second part of the study, the adjectives chosen by participants were coded for positive or negative valence, extremity, and emotionality. (Although these may seem similar, emotionality relates to how much an attitude is based on emotion, while extremity measures the extent to which an attitude is positive or negative; “outstanding”, for example, has high emotionality but low extremity.)      

    Those participants who chose more extreme adjectives, whether positive or negative, were less likely to see a change in the valence of the adjectives used to describe their gifts at the second time point. Similarly, the more an attitude was based on emotion, the less it changed too. A second study, which looked at attitudes towards brands, also found that emotionally-based attitudes changed less over time.

    The third study looked at attitudes in a more naturalistic setting: reviews of products posted online. The team obtained all reviews for all restaurants in Chicago over a period of twelve years, looking only at reviewers who had posted more than one review of the same establishment. The team then analysed both the emotional valence of the reviews and measured any differences in the number of stars the reviewers gave the restaurants at each time point.

    As in previous studies, positive emotionality consistently predicted less change in attitude across time, though negative emotionality did not. Positive extremity also predicted less change in attitude, while negative extremity predicted more.

    A final study looked at whether exposure to messages designed to evoke emotions actually increases the likelihood of people developing fixed attitudes. To do this, the team assigned participants to two conditions. In one, they saw a message about a fictional aquatic animal called the “lemphur” designed to elicit high emotion, reading about a touching underwater encounter between the creature and a diver. In the low-emotion condition, participants read a fact-based message about the lemphur similar to an encyclopedia entry.

    After reading the text, participants indicated their attitude towards the animal, selecting from the same list of adjectives used in the first study. In follow-up studies over the next few days, participants selected adjectives again.

    Those in the high-emotion condition were, unsurprisingly, more likely to indicate a more emotional response to the animal than those in the low-emotion condition, and also had a more extreme response. Those in the high-emotion condition also saw less change in their attitude towards the creature over time.

    Overall, emotional responses were related to more fixed attitudes. Notably, positive emotionality had a particularly strong effect, which may be useful to know when crafting public health messaging or other attempts at attitude change — inducing positive emotions, rather than negative emotions like shame, may be more beneficial. Whether positive emotions have a similar effect on actual behaviour, rather than just attitudes, remains to be seen.

    Attitudes Based on Feelings: Fixed or Fleeting?

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 19, 2021 11:19 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    NASA’s Ingenuity helicopter made history by flying on Mars

    Editor’s note: This story will be updated periodically.


    A helicopter just flew on Mars. NASA’s Ingenuity hovered for about 40 seconds above the Red Planet’s surface, marking the first flight of a spacecraft on another planet.

    In the wee hours on April 19, the helicopter spun its carbon fiber rotor blades and lifted itself into the thin Martian air. It rose about three meters above the ground, pivoted to look at NASA’s Perseverance rover, took a picture, and settled back down to the ground.

    “Goosebumps. It looks just the way we had tested it in our test chambers,” Ingenuity project manager MiMi Aung said in a news briefing after the flight. “Absolutely beautiful flight. I don’t think I can ever stop watching it over and over again.”

    As data from the flight started coming in to Ingenuity’s mission control room at NASA’s Jet Propulsion Lab in Pasadena, Calif., at about 6:35 a.m. EDT, a hush fell. And then cheers erupted as Håvard Grip, Ingenuity’s guidance, navigation and control lead, announced: “Confirmed that Ingenuity has performed its first flight, the first flight of a powered aircraft on another planet.”

    NASA’s Ingenuity helicopter took this photo of its own shadow while hovering about three meters in the Martian air on April 19. JPL-Caltech/NASA

    “It’s amazing, brilliant. Everyone is super excited,” said mechanical engineer and team member Taryn Bailey. “I would say it’s a success.”

    The flight, originally scheduled for April 11, was delayed to update the helicopter’s software after a test of the rotor blades showed problems switching from preflight to flight mode. After the reboot, a high-speed spin test April 16 suggested the shift was likely to work, setting the stage for the April 19 flight.

    “I never let you celebrate fully. Every time we hit a major milestone I’m like, not yet, not yet,” Aung told the team moments after the flight was confirmed. Now is the moment to celebrate, she said. “Take that moment and after that, let’s get back to work and more flights. Congratulations.”

    Ingenuity lifted into the thin Martian atmosphere for the first time on April 19, proving that flight is possible on another planet. This video was taken by the Perseverance rover, which watched from a safe distance away.

    This first-ever flight was a test of the technology; Ingenuity won’t do any science during its mission, set to last 30 Martian days from the moment it separated from the rover, the equivalent of 31 days on Earth. But its success proves that powered flight is possible in Mars’ thin atmosphere. Future aerial vehicles on Mars could help rovers or human astronauts scout safe paths through unfamiliar landscapes, or reach tricky terrain that a rover can’t traverse.

    “Technology demonstrations are really important for all of us,” said Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate. “It’s taking a tool we haven’t been able to use and putting it in the box of tools we have available for all of our missions at Mars.”

    Ingenuity’s flight was the culmination of more than seven years of imagining, building, testing and hoping for the flight team.

    “That is what building first-of-a-kind systems and flight experiments are all about: design, test, learn from the design, adjust the design, test, repeat until success,” Aung said in a news briefing on April 9.

    Aung and her team began testing early prototypes of a Mars helicopter in a 7.62-meter-wide test chamber at JPL in 2014. It wasn’t a given that flying on Mars would even be possible, Aung said. “It’s challenging for many different reasons.”

    Before it hitched a ride to Mars on the rover Perseverance, Ingenuity underwent extensive testing in a Mars simulator on Earth. Its engineers experimented with early prototypes and later with Ingenuity itself. These tests convinced the team that the craft could fly in Mars’ thin atmosphere.

    Even though Mars’ gravity is only about one-third of Earth’s, the air’s density is about 1 percent of that at sea level on Earth. It’s difficult for the helicopter’s blades to push against that thin air hard enough to get off the ground.

    Another way to think about it is that the air is thinner on Mars than it is at three times the height of Mount Everest, Ingenuity engineer Amelia Quon of JPL said in the news briefing. “We don’t generally fly things that high,” Quon said. “There were some people who doubted we could generate enough lift to fly in that thin Martian atmosphere.”

    So Quon and her team put the helicopter through a battery of tests over the course of five years. “My job … was to make Mars on Earth, and enough of it that we could actually fly our helicopter in it,” Quon said. The Mars simulation chamber could be emptied of Earth air and pumped full of carbon dioxide at Mars-like densities. Some versions of the helicopter were suspended from the ceiling to simulate Mars’ lower gravity. And wind speeds up to 30 meters per second were simulated by a bank of about 900 computer fans blowing at the helicopter.

    The final version of Ingenuity is light, about 1.8 kilograms. Its blades are longer (spanning about 1.2 meters) and rotate faster (about 2,400 rotations per minute) than a similar vehicle would need on Earth. By the time the helicopter hitched a ride to Mars with the Perseverance rover in July 2020, the engineers were confident the helicopter could fly and remain in control at Mars (SN: 7/30/20).
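
    To get a feel for those numbers, here is a rough back-of-envelope sketch in Python comparing hover requirements on Mars and Earth. The densities, gravity values and simple tip-speed lift model are illustrative assumptions, not mission figures:

        import math

        # Assumed constants (illustrative, not mission figures)
        rho_mars = 0.017    # kg/m^3, near-surface Martian air (~1% of Earth's)
        rho_earth = 1.225   # kg/m^3, sea-level air on Earth
        g_mars, g_earth = 3.71, 9.81   # m/s^2

        mass = 1.8       # kg, Ingenuity's mass
        radius = 0.6     # m, rotor radius (blades span about 1.2 meters)
        rpm = 2400       # rotor speed

        tip_speed = 2 * math.pi * radius * rpm / 60   # ~150 m/s
        disk_area = math.pi * radius**2

        def required_lift_coeff(weight, rho):
            # Crude rotor model: lift ~ 0.5 * rho * area * tip_speed^2 * C_L
            return weight / (0.5 * rho * disk_area * tip_speed**2)

        ratio = required_lift_coeff(mass * g_mars, rho_mars) / \
                required_lift_coeff(mass * g_earth, rho_earth)
        print(f"Mars demands ~{ratio:.0f}x the lift coefficient of Earth")
        # Thin air dominates weak gravity: the rotor must work roughly 27x
        # harder per unit of swept area, hence the long, fast blades.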

    Perseverance landed in a region called Jezero crater on February 18 (SN: 2/22/21). The helicopter was folded up beneath Perseverance’s belly under a protective shield until March 21.

    Over  the next few weeks, Perseverance drove around to find a flat spot for Ingenuity to launch. Then Ingenuity slowly unfolded itself and was finally lowered gently to the ground beneath Perseverance on April 3. The rover drove away quickly to get Ingenuity out of its shadow and allow the helicopter to charge its batteries with its solar panel, giving it enough power to survive the freezing Martian night. 

    Ingenuity arrived on Mars folded up under the Perseverance rover in a protective shield the size of a pizza box. After landing, Perseverance dropped the shield and slowly lowered Ingenuity to the ground, then drove away. JPL-Caltech/NASA

    On April 8 and 9, Ingenuity unfolded its rotor blades and tested their ability to spin in preparation to take to the air. After the software problem was fixed and the rotor blades retested on April 16, the flight got a green light for April 19. It was scheduled for roughly 3:30 a.m. Eastern Daylight Time on April 19, which corresponds to 12:30 p.m. Mars time, in the early afternoon. That gave the craft’s solar panel enough time to charge up its batteries for the flight. It was also a time when Perseverance’s weather sensors, called MEDA, suggested the average wind speed would be about six meters per second.

    NASA’s Ingenuity helicopter tested its spinning rotor blades on April 8, a week and a half before taking flight in the thin Martian air for the first time. JPL-Caltech/NASA

    Ingenuity had to pilot itself through the flight. That’s partly because of the communication delay — Mars is far enough from Earth that light signals take about 15 minutes to travel between the two planets. But it’s also because Mars’ thin air makes the helicopter difficult to steer. “Things happen too quickly for a human pilot to react to it,” Quon said.

    Perseverance filmed the flight from about 65 meters away, at a spot named Van Zyl Overlook. Ingenuity also filmed the flight from its own perspective, with two sets of cameras: its downward-facing navigation cameras captured the view below in black and white, while its color cameras scanned the horizon.

    Over several days, NASA’s Perseverance rover gently lowered the Ingenuity helicopter to the ground and then took this selfie with it on April 6 from about four meters away. The rover then drove off to a safe distance of 65 meters to get ready to watch Ingenuity’s first flight. MSSS/JPL-Caltech/NASA

    Now that this first flight went well, the team hopes to take up to four more flights over the course of Ingenuity’s mission, possibly starting as soon as April 22. Each will be a little bit more daring and riskier, Aung said. “We are going to continually push all the way to the limit of this rotorcraft.” And each one will be a nail-biter: Just one bad landing could end things immediately. Ingenuity has no way to right itself after a fall.

    That may be the way the mission ends, Aung admitted in the April 19 news briefing. “Ultimately, we expect the helicopter will meet its limit,” she said. Even if it eventually wipes out in a crash, the engineering team will learn valuable information from how the helicopter fails.

    At the end of Ingenuity’s mission, Perseverance will drive off, leaving the little helicopter that could behind, and continue its own mission: to search for signs of past life in Jezero crater, and to store rocks for a future mission to return to Earth.

    in Science News on April 19, 2021 11:06 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How the laws of physics constrain the size of alien raindrops

    Whether they’re made of methane on Saturn’s moon Titan or iron on the exoplanet WASP 76b, alien raindrops behave similarly across the Milky Way. They are always close to the same size, regardless of the liquid they’re made of or the atmosphere they fall in, according to the first generalized physical model of alien rain.

    “You can get raindrops out of lots of things,” says planetary scientist Kaitlyn Loftus of Harvard University, who published new equations for what happens to a falling raindrop after it has left a cloud in the April Journal of Geophysical Research: Planets. Previous studies have looked at rain in specific cases, like the water cycle on Earth or methane rain on Saturn’s moon Titan (SN: 3/12/15). But this is the first study to consider rain made from any liquid.

    “They are proposing something that can be applied to any planet,” says astronomer Tristan Guillot of the Observatory of the Côte d’Azur in Nice, France. “That’s really cool, because this is something that’s needed, really, to understand what’s going on” in the atmospheres of other worlds.

    Comprehending how clouds and precipitation form is important for grasping another world’s climate. Cloud cover can either heat or cool a planet’s surface, and raindrops help transport chemical elements and energy around the atmosphere.

    Clouds are complicated (SN: 3/5/21). Despite lots of data on earthly clouds, scientists don’t really understand how they grow and evolve.

    Raindrops, though, are governed by a few simple physical laws. Falling droplets of liquid tend to default to similar shapes, regardless of the properties of the liquid. The rate at which that droplet evaporates is set by its surface area.

    “This is basically fluid mechanics and thermodynamics, which we understand very well,” Loftus says.

    She and Harvard planetary scientist Robin Wordsworth considered rain in a variety of different forms, including water on early Earth, ancient Mars and a gaseous exoplanet called K2 18b that may host clouds of water vapor (SN: 9/11/19). The pair also considered Titan’s methane rain, ammonia “mushballs” on Jupiter and iron rain on the ultrahot gas giant exoplanet WASP 76b (SN: 3/11/20). “All these different condensables behave similarly, [because] they’re governed by similar equations,” she says.

    The team found that worlds with higher gravity tend to produce smaller raindrops. Still, all the raindrops studied fall within a fairly narrow size range, from about a tenth of a millimeter to a few millimeters in radius. Much bigger than that, and raindrops break apart as they fall, Loftus and Wordsworth found. Much smaller, and they’ll evaporate before hitting the ground (for planets that have a solid surface), keeping their moisture in the atmosphere.
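
    One way to build intuition for the breakup limit is the capillary length, sqrt(σ / (ρ g)): the scale above which gravity overwhelms surface tension and tears a falling drop apart. The Python sketch below uses rough, order-of-magnitude property values, my assumptions rather than the paper’s actual model:

        import math

        def capillary_length(sigma, rho_liquid, g):
            # Length scale (m) above which a falling drop tends to break apart
            return math.sqrt(sigma / (rho_liquid * g))

        # (surface tension N/m, liquid density kg/m^3, gravity m/s^2), rough values
        cases = {
            "water on Earth":   (0.072, 1000.0, 9.81),
            "methane on Titan": (0.017,  450.0, 1.35),
            "iron on WASP 76b": (1.8,   7000.0, 6.4),
        }

        for name, (sigma, rho, g) in cases.items():
            print(f"{name}: ~{capillary_length(sigma, rho, g) * 1e3:.1f} mm")
        # All three land at a few millimeters, echoing the narrow size range
        # the study reports across very different worlds.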

    Eventually the researchers would like to extend the study to solid precipitation like snowflakes and hail, although the math there will be more complicated. “That adage that every snowflake is unique is true,” Loftus says.

    The work is a first step toward understanding precipitation in general, says astronomer Björn Benneke of the University of Montreal, who discovered water vapor in the atmosphere of K2 18b but was not involved in the new study. “That’s what we are all striving for,” he says. “To develop a kind of global understanding of how atmospheres and planets work, and not just be completely Earth-centric.”

    in Science News on April 19, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What can neuroscience tell us about the mind of a serial killer?

    Serial killers—people who repeatedly murder others—provoke revulsion but also a certain amount of fascination in the general public. But what can modern psychology and neuroscience tell us about what might be going on inside the head of such individuals?

    Serial killers characteristically lack empathy for others, coupled with an apparent absence of guilt about their actions. At the same time, many can be superficially charming, allowing them to lure potential victims into their web of destruction. One explanation for such cognitive dissonance is that serial killers are individuals in whom two minds co-exist—one a rational self, able to successfully navigate the intricacies of acceptable social behaviour and even charm and seduce, the other a far more sinister self, capable of the most unspeakable and violent acts against others. This view has been a powerful stimulus in fictional portrayals ranging from Dr Jekyll and Mr Hyde, to Hitchcock’s Psycho, and a more recent film, Split. Yet there is little evidence that real-life serial killers suffer from dissociative identity disorder (DID), in which an individual has two or more personalities cohabiting in their mind, apparently unaware of each other.

    Instead, DID is a condition more associated with victims, rather than perpetrators, of abuse, who adopt multiple personalities as a way of coming to terms with the horrors they have encountered. Of course a perpetrator of abuse may also be a victim, and many serial killers were abused as children, but in general they appear not to be split personalities, but rather people conscious of their acts. Despite this, there is surely a dichotomy in the minds of such individuals perhaps best personified by US killer Ted Bundy, who was a “charming, handsome, successful individual [yet also] a sadist, necrophile, rapist, and murderer with zero remorse who took pride in his ability to successfully kill and evade capture.”

    “a recent brain imaging study … showed that criminal psychopaths had decreased connectivity between … a brain region that processes negative stimuli and those that give rise to fearful reactions”

    One puzzling aspect of serial killers’ minds is the fact that they appear to lack—or can override—the emotional responses that in other people allow us to identify the pain and suffering of other humans as similar to our own, and to empathise with that suffering. A possible explanation of this deficit was identified in a recent brain imaging study. This showed that criminal psychopaths had decreased connectivity between the amygdala—a brain region that processes negative stimuli and gives rise to fearful reactions—and the prefrontal cortex, which interprets responses from the amygdala. When connectivity between these two regions is low, processing of negative stimuli in the amygdala does not translate into any strongly felt negative emotions. This may explain why criminal psychopaths do not feel guilty about their actions, or sad when their victims suffer.

    Yet serial killers also seem to possess an enhanced emotional drive that leads to an urge to hurt and kill other human beings. This apparent contradiction in emotional responses still needs to be explained at a neurological level. At the same time, we should not ignore social influences as important factors in the development of such contradictory impulses. It seems possible that serial killers have somehow learned to view their victims purely as objects to be abused, or even as assemblies of unconnected parts. This might explain why some killers have sex with dead victims, or even turn their bodies into objects of utility or decoration, but it does not explain why they seem so driven to hurt and kill their victims. One explanation for the latter phenomenon is that many serial killers are insecure individuals who feel compelled to kill due to a morbid fear of rejection. In many cases, the fear of rejection seems to result from having been abandoned or abused by a parent. Such fear may compel a fledgling serial killer to want to eliminate any objects of their affections. They may come to believe that by destroying the person they desire, they can eliminate the possibility of being abandoned, humiliated, or otherwise hurt, as they were in childhood.

    Serial killers also appear to lack a sense of social conscience. Through our parents, siblings, teachers, peers, and other individuals who influence us as we grow up, we learn to distinguish right from wrong. It is this that inhibits us from engaging in anti-social behaviour. Yet serial killers seem to feel they are exempt from the most important social sanction of all—not taking another person’s life. For instance, Richard Ramirez, named the “Night Stalker” by the media, claimed at his trial that “you don’t understand me. You are not expected to. You are not capable of it. I am beyond your experience. I am beyond good and evil … I don’t believe in the hypocritical, moralistic dogma of this so-called civilized society.” 

    It remains far from clear why a few people react to abuse or trauma earlier in their lives by later becoming serial killers. But hopefully, new insights into the psychological and neurological basis of their actions may in the future help us identify potential killers and dissuade them from committing such horrendous crimes.

    Featured image via Pixabay

    The post What can neuroscience tell us about the mind of a serial killer? appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on April 19, 2021 09:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Xiongbin Lu nanobombs cancer with fluorescent nude mouse recycling

    "IU School of Medicine scientists discover 'game-changer' treatment for triple negative breast cancer"

    in For Better Science on April 19, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    2-Minute Neuroscience: Chronic Traumatic Encephalopathy (CTE)

    Chronic traumatic encephalopathy, or CTE, is a neurological condition linked primarily to repetitive head trauma. In this video, I discuss what happens in the brain during CTE.

    in Neuroscientifically Challenged on April 18, 2021 12:16 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: ‘The Damage Campaign;’ timber industry retracts comments, apologizes; COVID-19 vaccine study conflicts disclosure

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Palmitoleic acid paper pulled for data concerns Pharma company demands …

    in Retraction watch on April 17, 2021 12:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Is early vision like a convolutional neural net?

    Early convolutional neural net (CNN) architectures like the Neocognitron, LeNet and HMAX were inspired by the brain. But how much like the brain are modern CNNs? I made a pretty strong claim on Twitter a few weeks ago that early visual processing is nothing like a CNN:

    In typical Twitter fashion, my statement was a little over-the-top, and it deserved a takedown. What followed was a long thread of replies from Blake Richards, Grace Lindsay, Dileep George, Simon Kornblith, members of Jim DiCarlo’s lab, and even Yann LeCun. The different perspectives are interesting, and I’m curating them here.

    My position, in brief, is that weight sharing and feature maps – the defining characteristics of modern CNNs – are crude approximations of the topography of the early visual cortex and its localized connection structure. Whether this distinction matters in practice depends on what you want your models to accomplish – I argue that for naturalistic vision, it can matter a lot.

    Background

    There are several defining characteristics of modern CNNs:

    • Hierarchical processing
    • Selectivity operations (i.e. the ReLU)
    • Pooling operations
    • Localized receptive fields
    • Feature maps
    • Weight sharing

    Where did these ideas come from?

    Hubel and Wiesel (1962) described simple and complex cells in the cat primary visual cortex. Simple cells respond best to a correctly oriented bar or edge. Complex cells, on the other hand, are not sensitive to the sign of the contrast of the bar or edge, or its exact location. They hypothesized that simple cells are generated by aggregating the correct LGN afferents selective for light and dark spots, followed by the threshold nonlinearity of the cells. Complex cells were hypothesized to be generated by pooling from similarly tuned simple cells. They later expanded these ideas: perhaps first-order and second-order hypercomplex cells are created by the repetition of the same pattern. Thus, hierarchy, selectivity, pooling and localized receptive fields were all hypothesized to be taking place.

    Annotated figures from Hubel & Wiesel (1962).
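
    A minimal Python sketch of the simple- and complex-cell operations described above, using the textbook abstraction of Gabor-like simple cells and an energy-model complex cell (a later formalization, not Hubel and Wiesel’s own):

        import numpy as np

        def gabor(size=21, theta=0.0, phase=0.0, freq=0.15, sigma=4.0):
            # An oriented Gabor filter: a standard model of a simple-cell RF
            r = np.arange(size) - size // 2
            x, y = np.meshgrid(r, r)
            xr = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * freq * xr + phase)

        rng = np.random.default_rng(0)
        patch = rng.standard_normal((21, 21))   # a toy image patch

        def simple_cell(patch, phase):
            # Linear filtering followed by a threshold (ReLU-like) nonlinearity
            return max(0.0, float(np.sum(patch * gabor(phase=phase))))

        # Complex cell: pool simple cells sharing an orientation but differing
        # in phase, giving insensitivity to contrast sign and exact position
        complex_response = sum(simple_cell(patch, p) ** 2
                               for p in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))
        print(complex_response)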

    Fukushima (1980) adapted many of these ideas in a self-organized hierarchical neural network called the Neocognitron. In the S-layers, neurons weight the input from the previous layer via localized receptive fields, followed by a ReLU nonlinearity. In C-layers, inputs from the previous layer are pooled linearly, followed by a sigmoid.

    From Fukushima (1980)

    The Neocognitron also introduces what may be the defining features of modern CNNs: parallel feature maps (here called planes) and weight sharing. This is most clearly seen in the selectivity operation for S-layers, which is a convolution over the set of local coordinates S_l, followed by a ReLU:

    From Fukushima (1980)
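
    In modern framework terms, that selectivity operation is a convolution followed by a ReLU. A rough PyTorch analogue, which ignores the Neocognitron’s inhibitory shunting term, might be:

        import torch
        import torch.nn as nn

        # One S-layer analogue: localized receptive fields whose weights are
        # shared within each feature 'plane', i.e. a convolution, then a ReLU
        s_layer = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5, padding=2),
            nn.ReLU(),
        )

        x = torch.randn(1, 1, 28, 28)    # a toy input 'retina'
        print(s_layer(x).shape)          # torch.Size([1, 8, 28, 28]): 8 planes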

    The reason behind the introduction of parallel feature maps and weight sharing is not very clear in the original paper, and in fact in section 3 the paper casts doubt on how realistic these assumptions are as a model of vision:

    One of the basic hypotheses employed in the neocognitron is the assumption that all the S-cells in the same S-plane have input synapses of the same spatial distribution, and that only the positions of the presynaptic cells shift in parallel in accordance with the shift in position of individual S-cell’s receptive fields. It is not known whether modifiable synapses in the real nervous system are actually self-organized always keeping such conditions. Even if it is assumed to be true, neither do we know by what mechanism such a self-organization goes on.

    So why introduce these features at all?

    Topography conquers all

    Fukushima however links localized receptive fields with the idea of localized topography:

    […] orderly synaptic connections are formed between retina and optic tectum not only in the development in the embryo but also in regeneration in the adult amphibian or fish

    This is one of the enduring facts of visual neuroscience: inputs from the retina are preserved topologically as they make their way to the LGN, V1, and onwards to extrastriate cortex. Visual angle and log radius are mapped from the retina to V1 in development, guided by chemical gradients. The progression of the maps reverses as we go up the visual hierarchy (in fact, areas are defined by this very reversal in direction). What’s remarkable is how precise the topography is: Hubel and Wiesel estimated that in primary visual cortex, neighbouring cells varied in receptive field center by less than half a receptive field.

    From Ringach (2004)

    There’s another fact of life in cortex, which is that horizontal wiring is expensive. If a neuron integrates from localized inputs, and those inputs are retinotopically organized, that neuron will have a localized receptive field. If you assume that the synaptic pattern is localized and random, then for every cell there must be other similarly tuned cells somewhere else in the visual field. My postdoc advisor Dario Ringach introduced a model (2004) for how simple cells in primary visual cortex can arise from such sparse localized connections.
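
    A toy sketch of this idea, assuming Gaussian ON/OFF afferent profiles clustered at one retinotopic location (loosely in the spirit of Ringach, 2004, not his exact model):

        import numpy as np

        rng = np.random.default_rng(1)
        size = 41
        r = np.arange(size) - size // 2
        x, y = np.meshgrid(r, r)

        def afferent(cx, cy, sigma=2.5):
            # Gaussian profile of one LGN-like afferent centered at (cx, cy)
            return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

        # A cortical cell sums a handful of retinotopically nearby ON (+) and
        # OFF (-) afferents chosen at random; locality alone tends to yield a
        # localized, often elongated, simple-cell-like receptive field.
        rf = np.zeros((size, size))
        for _ in range(6):
            cx, cy = rng.normal(0.0, 4.0, size=2)   # inputs cluster in one spot
            rf += rng.choice([1.0, -1.0]) * afferent(cx, cy)
        # Inspect with e.g. matplotlib: plt.imshow(rf, cmap='RdBu_r')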

    That covers feature maps. However, if the input statistics to primary visual cortex are stationary, then a self-supervised criterion could refine a randomly initialized V1 feature map, effectively letting the input do the weight tying – a subtle point brought up by Blake Richards.

    Non-stationarities at the large scale

    I hope to have convinced you that the link between CNNs, first introduced by Fukushima, and early visual processing in the brain is quite subtle. Whether a CNN is a good or a bad model of early visual processing depends on what phenomenon we care to model. If we’re talking about core visual recognition in the parafovea – which is where the majority of the quantitative comparisons come from – the match is quite good. At larger spatial scales, however, I would argue that the match is compromised by two facts.

    First, spatial resolution is not constant as a function of space. A CNN on a Cartesian grid doesn’t scale with eccentricity. This issue has received some attention lately, and foveating CNNs are starting to be considered – see this recent paper from Arturo Deza and Talia Konkle for example.

    Spatial frequency tuning correlates with spatial tuning. From Arcaro and Livingstone (2017)

    Secondly, the eyes foveate towards interesting targets – e.g. faces – which means that image statistics are highly nonstationary as a function of space. The ground looks very different from the sky. Hands tend to be in the lower half of the visual field, and numerous visual areas have only partial maps of the visual field. There’s an overrepresentation of radial orientations, more curvature selectivity in the fovea, no blue cones in the fovea, etc.

    Here’s a picture to illustrate this. If VGG19 is a good model of primary visual cortex, then it follows that maximizing predicted activity in primary visual cortex could be done by maximizing an unweighted sum of VGG19 subunits, a la DeepDream. If you do that for layer 7 of VGG19, you get the picture on the left. However, we can use fMRI to estimate a matrix, via ridge regression, that maps VGG19 activations to an fMRI subspace, and then optimize a weighted sum of VGG19 subunits, giving us the picture on the right. This image highlights some of the known spatial biases in primary visual cortex – shifts in preferred spatial frequency as a function of eccentricity, radial bias, curvature bias in the fovea – which are not apparent in an unweighted VGG19. So at the very least, the brain’s image manifold is rotated and rescaled compared to the VGG19 image manifold.
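
    The mapping step is conceptually simple. A hedged sketch, with hypothetical array names and scikit-learn’s Ridge standing in for whatever solver was actually used:

        import numpy as np
        from sklearn.linear_model import Ridge

        # Hypothetical shapes: n_images stimuli, n_feat VGG19 subunit
        # activations, n_vox fMRI voxels in primary visual cortex
        n_images, n_feat, n_vox = 500, 2048, 300
        features = np.random.randn(n_images, n_feat)  # stand-in for VGG19 activations
        voxels = np.random.randn(n_images, n_vox)     # stand-in for fMRI responses

        # Ridge regression estimates the matrix mapping the VGG19 subspace
        # to the fMRI subspace
        mapping = Ridge(alpha=10.0).fit(features, voxels)
        W = mapping.coef_                   # shape (n_vox, n_feat)

        # A DeepDream-style optimization would then maximize this weighted
        # sum of subunits instead of an unweighted one
        objective_weights = W.sum(axis=0)   # one weight per VGG19 subunit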

    Margaret Livingstone has a very interesting line of research (talk here) highlighting the close link between spatial biases and the development of object and face recognition modules in high-level visual cortex. More generally, if we’re interested in how vision is organized for action – ecological vision in the tradition of Gibson – then it matters whether a network has captured, for example, the fact that the bottom half of the visual field has very different affordances than the top half. If you applied an unsupervised criterion to learn features from natural images without weight sharing, those features would vary quite a bit across space – unlike in a CNN. This point was also raised by Dileep George.

    Weight sharing is not strictly necessary

    LeCun (1989) demonstrates the use of backpropagation to train a convolutional image recognition network. There’s a beautiful figure that highlights the effect of adding more and more constraints, in particular in going from a fully-connected 2-layer network (net2) to a high-capacity convolutional neural net (net5):

    From LeCun (1989)

    LeCun (1989) clarifies that weight sharing is a clever trick that decreases the number of weights to be learned, which in the small-data limit is practically important for good test set accuracy. We are a long way from 1989 in terms of dataset size, and unsupervised learning could learn untied feature maps, as pointed out by Yann LeCun in the Twitter thread. But would currently existing benchmarks favor ANNs-with-local-receptive-field-but-untied-weights?
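
    To make the parameter-count point concrete, here is a small PyTorch comparison of a convolutional layer against a locally connected (untied-weights) analogue built with unfold; a sketch for counting weights, not a tuned implementation:

        import torch
        import torch.nn.functional as F

        B, C, H, W, K, out_ch = 1, 3, 32, 32, 5, 16

        # Shared weights: one 5x5 filter bank reused at every spatial position
        conv_w = torch.randn(out_ch, C, K, K)

        # Untied weights: a different filter bank at each of the H*W positions
        local_w = torch.randn(H * W, out_ch, C * K * K)

        x = torch.randn(B, C, H, W)
        patches = F.unfold(x, K, padding=K // 2)        # (B, C*K*K, H*W)

        conv_out = F.conv2d(x, conv_w, padding=K // 2)  # weight-shared output
        local_out = torch.einsum('bcp,poc->bop', patches, local_w)

        print(conv_w.numel())    # 1,200 weights
        print(local_w.numel())   # 1,228,800 weights: 1,024x more without sharing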

    The leaderboard is the message

    Let’s turn back to the claim that early visual cortex is like a particular CNN. As Wu, David and Gallant (2006) put it:

    an important long-term goal of sensory neuroscience [is] developing models of sensory systems that accurately predict neuronal responses under completely natural conditions.

    Brain-Score is an embodiment of this idea – benchmark, across many different datasets and ventral stream areas, the ability of many different CNNs to explain brain activity. The leaderboard is pretty clear that the VGG19 architecture outperforms alternatives in explaining average firing rates – even when those alternatives are better at classifying images in ImageNet. That’s an interesting finding – VGG19 is shallower and has larger receptive fields than more modern CNNs, so perhaps its performance on this benchmark stems from that.

    Grace Lindsay pointed out that:

    […] when the results were first coming out that trained convolutional neural networks are good predictors of activity in the visual system some people had the attitude of “that’s not interesting because obviously anything that does vision well will look like the brain”

    What’s interesting is that we now have architectures that solve object recognition which are very different from CNNs, namely vision transformers (ViT). They are architecturally different from brains, and while they perform well on ImageNet, they underperform in predicting single-neuron responses. So now we have a clear example of a disconnect between ImageNet performance and similarity to the brain, and that strengthens the claim that core object recognition is like a CNN.

    So on the one hand, the data from Brain-Score says that CNNs are the best models of core object recognition, yet there are pretty stark ways in which early visual processing is unlike a CNN, some of which I’ve mentioned already, others of which have been highlighted in Grace Lindsay’s excellent review.

    Would Brain-Score, in theory, rate a model that respects the distinction between excitation and inhibition (Dale’s law) more highly? That seems unlikely, since there are no cell type annotations in the datasets. Would it rate a foveated model more highly, or one that doesn’t have weight sharing? Again, unlikely, since as Tiago Marques points out, for technical reasons, most datasets are taken in the parafovea. In any case, the metric used to score the models is rotation invariant, so it wouldn’t be able to tell these cases apart. As Simon Kornblith points out, choices of metrics are not falsifiable. The right approach for choosing metrics is the axiomatic one, as demonstrated in Simon’s CKA paper: the modeller or the community decides what the right metric is based on design criteria that represent what they think is interesting about the data.
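
    For reference, the linear variant of CKA from that paper is compact enough to state in a few lines. A minimal NumPy sketch comparing two representation matrices (rows are stimuli, columns are units or voxels):

        import numpy as np

        def linear_cka(X, Y):
            # Linear Centered Kernel Alignment: a similarity in [0, 1] that is
            # invariant to rotations and isotropic scalings of either space
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            num = np.linalg.norm(Y.T @ X, 'fro') ** 2
            den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
            return num / den

        # e.g. linear_cka(model_layer_activations, recorded_voxel_responses)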

    Does that mean we should ignore Brain-Score? No! I really like Brain-Score – I wish there were many more Brain-Score-like leaderboards! Comparing on lots of datasets prevents cherry picking – there is robustness in the meta-analytic approach. What I’m excited about is the possibility of Brain-Scores – a constellation of leaderboards that benchmark different models according to rules that match modelers’ interests. I’ve been involved in proto-community-benchmark efforts like the neural prediction challenge and spikefinder before, and the technological barriers to running such a leaderboard have been lowered by commoditized cloud infra. The value proposition is also becoming clearer, and I am excited to see more of these efforts pop up.

    Conclusion

    In brief:

    • Convolutional neural nets assume shared weights
    • This assumption is not valid at large spatial scales
    • Large spatial scales matter if you care about naturalistic foveated vision
    • It’s a useful assumption in the low-data regime, which we’re not in
    • The current benchmarks are not sensitive to this issue
    • It’s possible and desirable to create new benchmarks which are sensitive to this, and I hope to do that in the future!

    in xcorr.net on April 16, 2021 06:52 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    People with rare blood clots after a COVID-19 jab share an uncommon immune response

    Evidence is building that an uncommon immune response is behind dangerous, but incredibly rare, blood clots associated with some COVID-19 vaccines. But the good news is that there is a test doctors can use to identify it and get patients the right care.

    A small number of people out of the millions vaccinated with AstraZeneca’s or Johnson & Johnson’s COVID-19 shots have developed severe blood clots, such as ones in the sinuses that drain blood from the brain (SN: 4/7/21; 4/13/21). A few have died.

    Studies suggest that some inoculated people develop an immune response that attacks a protein called platelet factor 4 or PF4, which makes platelets form clots. Those platelets get used up before the body can make more. So these patients wind up with both the rare clots and low levels of blood platelets.

    Of 23 patients who received AstraZeneca’s jab and had symptoms of clots or low platelets, 21 tested positive for antibodies to PF4, researchers report April 16 in the New England Journal of Medicine. Of those, 20 people developed blood clots. The finding adds to previous studies that found the same antibodies in additional patients who got AstraZeneca’s shot and had the dangerous clots.  

    Five out of six women who had clots after receiving Johnson & Johnson’s shot in the United States also had PF4 antibodies, health officials said April 14 during an Advisory Committee on Immunization Practices meeting. That advisory group to the U.S. Centers for Disease Control and Prevention is assessing what needs to be done to lift a temporary pause on administering the Johnson & Johnson jab that was prompted by blood clot concerns (SN: 4/13/21). One man had developed brain sinus clots during the shot’s clinical trial and a seventh case is under investigation, the pharmaceutical company said during the meeting.

    “Because we are aware of this syndrome… we know how to treat it,” says Jean Connors, a clinical hematologist at Harvard Medical School and Brigham and Women’s Hospital in Boston who was not involved in the studies. And unlike the people who developed the clots before officials pinpointed the link, “we can diagnose it faster and treat it more appropriately if it does happen, so that the outcomes will be better.”

    That’s because the vaccine-induced clots are similar to a condition called heparin-induced thrombocytopenia, or HIT. Patients with HIT develop blood clots when treated with the commonly used anti-coagulant drug heparin. Heparin attaches to the PF4 protein, and some people develop an immune response that attacks the two molecules.   

    Treating vaccinated patients who have PF4 antibodies with heparin is like “adding fuel to the fire” and may cause them to develop more clots, Connors says. Four of the six U.S. women who developed clots after getting the Johnson & Johnson vaccine, for instance, received heparin, as did the man in the clinical trial. The man recovered and one woman was discharged from the hospital. Three were still hospitalized as of April 14.

    Health care workers can test for PF4. And if a patient tests positive, there are many anticoagulants other than heparin that clinicians can use for treatment, Connors says.

    in Science News on April 16, 2021 05:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A new book explores how military funding shaped the science of oceanography

    Science on a Mission
    Naomi Oreskes
    Univ. of Chicago, $40

    In 2004, Japanese scientists captured the first underwater images of a live giant squid, a near-mythical, deep-ocean creature whose only interactions with humans had been via fishing nets or beaches where the animals lay dead or dying.

    Getting such a glimpse could have come much sooner. In 1965, marine scientist Frederick Aldrich had proposed studying these behemoths of the abyss using Alvin, a submersible funded by the U.S. Navy and operated by the Woods Hole Oceanographic Institution in Massachusetts. During the Cold War, however, studying sea life was not a top priority for the Navy, the main funder of U.S. marine research. Instead, the Navy urgently needed information about the terrain of its new theater of war and a thorough understanding of the medium through which submarines traveled.

    In Science on a Mission, science historian Naomi Oreskes explores how naval funding revolutionized our understanding of earth and ocean science — especially plate tectonics and deep ocean circulation. She also investigates the repercussions of the military’s influence on what we still don’t know about the ocean.

    The book begins just before World War II, when the influx of military dollars began. Oreskes describes how major science advances germinated and weaves those accounts with deeply researched stories of backstabbing colleagues, attempted coups at oceanographic institutions and daring deep-sea adventures. The story flows into the tumult of the 1970s, when naval funding began to dry up and scientists scrambled to find new backers. Oreskes ends with oceanography’s recent struggles to align its goals not with the military, but with climate science and marine biology.

    Each chapter could stand alone, but the book is best consumed as a web of stories about a group of people (mostly men, Oreskes notes), each of whom played a role in the history of oceanography. Oreskes uses these stories to explore the question of what difference it makes who pays for science. “Many scientists would say none at all,” she writes. She argues otherwise, demonstrating that naval backing led scientists to view the ocean as the Navy did — as a place where men, machines and sound travel. This perspective led oceanographers to ask questions in the context of what the Navy needed to know.

    One example Oreskes threads through the book is bathymetry. With the Navy’s support, scientists discovered seamounts and mapped mid-ocean ridges and trenches in detail. “The Navy did not care why there were ridges and escarpments; it simply needed to know, for navigational and other purposes, where they were,” she writes. But uncovering these features helped scientists move toward the idea that Earth’s outer layer is divided into discrete, moving tectonic plates (SN: 1/16/21, p. 16).

    Through the lens of naval necessity, scientists also learned that deep ocean waters move and mix. That was the only way to explain the thermocline, a zone of rapidly decreasing temperature that separates warm surface waters from the frigid deep ocean, which affected naval sonar. Scientists knew that acoustic transmissions depend on water density, which, in the ocean, depends on temperature and salinity. What scientists discovered was that density differences coupled with Earth’s rotation drive deep ocean currents that take cold water to warm climes and vice versa, which in turn create the thermocline.

    Unquestionably, naval funding illuminated physical aspects of the ocean. Yet many oceanographers failed to recognize that the ocean is also an “abode of life.” The Alvin’s inaugural years in the 1960s focused on salvage, acoustics research and other naval needs until other funding agencies stepped in. That switch facilitated startling discoveries of hydrothermal vents and gardens of life in the pitch black of the deep ocean.

    As dependence on the Navy lessened, many Cold War scientists and their trainees struggled to reorient their research. For instance, their view of the ocean, largely driven by acoustics and ignorant of how sound affects marine life, led to public backlash against studies that could harm sea creatures.

    “Every history of science is a history both of knowledge produced and of ignorance sustained,” Oreskes writes. “The impact of underwater sound on marine life,” she says, “was a domain of ignorance.”


    Buy Science on a Mission from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

    in Science News on April 16, 2021 03:02 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Hand Gestures And Sexist Language: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    It’s wrong to say that introverts have fared better during the pandemic, writes Lis Ku at The Conversation. Instead, studies have shown that in many ways introverts’ wellbeing has suffered more than that of extraverts. This could be because extraverts may have more social support, for instance, or because extraversion is related to superior coping strategies — although Ku emphasises that there are likely many other traits, beliefs and values that are also important in determining people’s response to lockdown.


    A new trial suggests that the psychedelic drug psilocybin could be as good as existing antidepressant drugs at treating depression, when it is combined with psychotherapy. However, researchers caution that more work with larger, more diverse samples is needed before the drug could be used outside of a research setting, reports Nicola Davis at The Guardian.


    When the queen of a colony of Indian jumping ants dies, the worker ants compete to become the new queen. Now researchers have discovered that in the process, these would-be queens shrink their brains by about 20%. The team suspects that this is because there are fewer cognitive demands placed on an ant whose primary job is simply to reproduce, reports Annie Roth at The New York Times, so energy that would be going to the workers’ brains is better spent on the reproductive system instead.


    Why are older, single women labelled with the negative term “spinster”, while the only word for an unmarried man is the much more neutral “bachelor”?  At BBC Future, Sophia Smith Galer delves into the words we use to discuss men and women — and asks whether language simply reflects the sexism of society, or might actually perpetuate those biases.


    Parents often worry that social media is having negative effects on children’s mental health. But what does the science say? Well, there’s not much evidence either way, writes Andrew Przybylski at BBC Science Focus, because social media companies don’t share their data with researchers. But given that social media may be an important outlet for many young people, well-intentioned efforts to intervene could end up backfiring.


    Can boosting kids’ “grit” help them to achieve academic success? It’s a popular idea, but one without a huge amount of evidence behind it, writes Jesse Singal at Nautilus, in an extract from his new book.


    Making hand gestures during a lesson can help people learn abstract concepts. In a recent study, participants watched an animated lesson about statistical models and then completed a test. Those who had made hand gestures to imitate parts of the lesson subsequently performed better on the test than those who had made no gestures, or those who had made gestures that were inconsistent with the lesson. Matthew Hutson explains the work at Scientific American.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on April 16, 2021 01:43 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    International representation in Neurology journals: no improvement in over a decade

    Click here to read the full study published in BMC Medical Research Methodology.

    Biomedical and public health research should reflect the diversity of global health and the impact that social, cultural, and geographical factors can have on local health practice and policy. Until now, no studies have examined the representation of developing countries among the authors, editors, or research published in high-ranking neurology journals.

    The need for research in neurology cannot be overstated. Globally, neurological disorders are the leading cause of disability, yet low- and middle-income countries bear almost 80% of the burden of neurological disorders. As the population continues to grow, the degree of burden will continue to rise.

    Authorship and editorial board representation from developing countries in neurology journals is exceedingly rare.

    To assess representation in neurology journals, we conducted a cross-sectional study of all research articles published in 2010 and 2019 in the five highest ranked peer-reviewed neurology journals: The Lancet Neurology, Acta Neuropathologica, Nature Reviews Neurology, Brain, and Annals of Neurology. Using this data, we determined the extent of contributions of authors, editors, and research from developing countries, as well as the degree of international research collaboration between developed and developing countries.

    We found that authorship and editorial board representation from developing countries in neurology journals is exceedingly rare, and this has not changed in the past decade. First authorship was attributed to authors from developing countries in only 2% of research articles in 2010 and 3% in 2019. The lack of representation in research extends to the editorial boards of the selected journals, none of which had a board member from a developing country. Unsurprisingly, the primary data of these publications originated largely from developed countries with advanced research facilities, namely the United States, United Kingdom, and Germany.

    National and international research bodies are well placed to reduce this disparity.

    Tackling underrepresentation in research is no simple feat. Nevertheless, our results highlight an urgent need for strategies to support high-quality, locally driven biomedical research in developing countries. Local researchers there would benefit from greater research opportunities, education, and training, enabling them to direct socially and culturally relevant research that is readily applicable to local healthcare systems.

    National and international research bodies are well placed to reduce this disparity through greater representation via international collaborations which strengthen the quality of research in developing countries. By fostering high-quality and culturally relevant research, local healthcare systems are able to readily apply these findings to meet the neurological needs of their population.

    The post International representation in Neurology journals: no improvement in over a decade appeared first on BMC Series blog.

    in BMC Series blog on April 16, 2021 01:42 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    50 years ago, the United States wanted to deflate the helium stockpile

    cover of the April 17, 1971 issue

    Helium: Should it be conserved?
    Science News, April 17, 1971

    To avoid some of the loss and provide a stockpile against future needs, the United States Government, in the late 1950s, established a helium conservation program.… Under the program the Bureau of Mines contracts with certain natural gas producers to extract helium and store it in underground chambers. Now the users and extractors of helium are fighting a decision to end that program.

    Update

    To the relief of balloons everywhere, the Federal Helium Reserve survived. Arguments to shutter the facility, located in Texas, centered on declining use. But the element is often used in scientific research and is now crucial for smartphone manufacturing and MRI machines. Global demand is high, and users have faced numerous shortages. In 2016, the discovery of a helium gas deposit under Tanzania temporarily eased concerns that the world’s supply would run dry (SN: 7/23/16, p. 14). Still, the U.S. government has long wanted to float away from the helium game and plans to close the reserve later this year.

    in Science News on April 16, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Palmitoleic acid paper pulled for data concerns

    A journal has retracted the 2014 report of a clinical trial of a supplement touted as a way to reduce the risk of cardiovascular disease after beginning to suspect that the data were not reliable.  The study, “Purified palmitoleic acid for the reduction of high-sensitivity C-reactive protein and serum lipids: A double-blinded, randomized, placebo controlled … Continue reading Palmitoleic acid paper pulled for data concerns

    in Retraction watch on April 16, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Schneider Shorts 16.04.2021

    Introducing a new For Better Science weekly format: Schneider Shorts

    in For Better Science on April 16, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Neandertal DNA from cave mud shows two waves of migration across Eurasia

    Neandertal DNA recovered from cave mud reveals that these ancient humans spread across Eurasia in two different waves.

    Analysis of genetic material from three caves in two countries suggests an early wave of Neandertals about 135,000 years ago may have been replaced by genetically and potentially anatomically distinct successors 30,000 years later, researchers report April 15 in Science. The timing of this later wave suggests potential links to climate and environmental shifts.

    By extracting genetic material from mud, “we can get human DNA from people who lived in a cave without having to find their remains, and we can learn interesting things about those people from that DNA,” says Benjamin Vernot, a population geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.

    A few years ago, scientists showed that it’s possible to extract prehistoric human DNA from dirt, which contains genetic material left behind by our ancestors from skin flakes, hair or dried excrement or bodily fluids such as sweat or blood. Genetic analysis of ancient sediments could therefore yield valuable insights on human evolution, given that ancient human fossils with enough DNA suitable for analysis are exceedingly rare (SN: 6/26/19).

    Until now, the ancient human DNA analyzed from sediments came from mitochondria — the organelles that act as energy factories in our cells — not the chromosomes in cell nuclei, which contain the actual genetic instructions for building and regulating the body. Although chromosomes hold far more information, retrieving samples of this nuclear DNA from caves proved challenging because of its relative scarcity. A human cell often possesses thousands of copies of its mitochondrial genome for every one set of chromosomes, and the vast majority of any DNA found in ancient dirt belongs to other animals and to microbes.

    To extract ancient human chromosomal DNA from caves, Vernot and colleagues identified regions in chromosomes rich in mutations specific to hominids to help the team filter out nonhuman DNA. This helped the researchers successfully analyze Neandertal chromosomal DNA from more than 150 samples of sediment roughly 50,000 to 200,000 years old from a cave in Spain and two caves in Siberia.
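
    In spirit, that filtering step is a lookup against a panel of hominin-diagnostic positions: a read counts as hominin only if it overlaps such a position and carries the hominin variant there. Here is a minimal sketch of that logic; every position, allele and read in it is invented for illustration, and the real pipeline adds alignment, deamination-damage checks and much more.

    ```python
    # Conceptual sketch only: keep sequencing reads that overlap a
    # hominin-diagnostic position and carry the hominin allele there.
    # Positions, alleles and reads are invented stand-ins.
    from dataclasses import dataclass

    DIAGNOSTIC_SITES = {1_204_512: "T", 1_204_987: "A"}   # position -> allele

    @dataclass
    class Read:
        start: int   # mapped start position
        seq: str     # read bases

    def looks_hominin(read: Read) -> bool:
        """True if the read covers at least one diagnostic site and
        shows the hominin allele at that site."""
        for pos, allele in DIAGNOSTIC_SITES.items():
            offset = pos - read.start
            if 0 <= offset < len(read.seq) and read.seq[offset] == allele:
                return True
        return False

    reads = [
        Read(1_204_500, "ACGTACGTACGTTACG"),   # covers 1_204_512 with "T"
        Read(2_000_000, "ACGTACGTACGTACGT"),   # background (microbes, fauna)
    ]
    kept = [r for r in reads if looks_hominin(r)]
    print(f"kept {len(kept)} of {len(reads)} reads")
    ```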

    After the team compared its data with DNA previously collected from Neandertal fossils of about the same age, the findings suggested that all these Neandertals were split into two genetically distinct waves that both dispersed across Eurasia. One emerged about 135,000 years ago, while the other arose roughly 105,000 years ago, with one branch of the earlier wave giving rise to all the later groups examined.

    In the Spanish cave, the researchers found genetic evidence of both groups, with the later wave apparently replacing the earlier one. “There were signs based on the mitochondrial DNA of this turnover, but seeing it clearly with the nuclear DNA is really exciting,” says paleogeneticist Qiaomei Fu at the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, who did not take part in this study.

    The later wave may be linked with the emergence of the last “classic” stage of Neandertal anatomy, skeletal features such as a bulge at the back of the skull that may indicate strong neck muscles or enlarged brain regions linked to vision, the researchers say. This later wave may have coincided with cooling and other environmental changes that came with the advent of the last ice age, they note.

    This research emphasizes how scientists working at potential Neandertal sites should not throw away dirt as is traditionally done, says paleogeneticist Carles Lalueza-Fox at the Institute of Evolutionary Biology in Barcelona, who did not take part in this study. Instead, he says, special protocols may be needed to avoid contaminating these areas with modern DNA.

    in Science News on April 15, 2021 06:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Pharma company demands retraction, damages in lawsuit against journal

    A drug company that manufactures a painkiller used for surgery patients has sued an anesthesiology journal along with its editor and publisher and the authors of articles that it says denigrated its product unfairly. In a complaint filed yesterday in U.S. District Court in New Jersey, Pacira Biosciences claims that “In the February 2021 issue … Continue reading Pharma company demands retraction, damages in lawsuit against journal

    in Retraction watch on April 15, 2021 05:13 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Earth sweeps up 5,200 tons of extraterrestrial dust each year

    As our planet orbits the sun, it swoops through clouds of extraterrestrial dust — and several thousand metric tons of that material actually reaches Earth’s surface every year, new research suggests.

    During three summers in Antarctica over the past two decades, researchers collected more than 2,000 micrometeorites from three snow pits that they’d dug. Extrapolating from this meager sample to the rest of the world, tiny pebbles from space account for a whopping 5,200 metric tons of weight gain each year, researchers report in the April 15 Earth and Planetary Science Letters.

    Much of Antarctica is the perfect repository for micrometeorites because there’s no liquid water to dissolve or otherwise destroy them, says Jean Duprat, a cosmochemist at Sorbonne University in Paris (SN: 5/29/20). Nevertheless, collecting the samples was no easy chore.

    First, Duprat and colleagues had to dig down two meters or more to reach layers of snow deposited before 1995, the year when researchers set up a field station at an inland site dubbed Dome C. Then they used ultraclean tools to collect hundreds of kilograms of snow, melt it and sieve the tiny treasures from the frigid water.

    To hunt for micrometeorites that have fallen to Antarctica in recent decades, researchers dig trenches (pictured) to collect snow that is later melted and then sieved for the space dust. (Image: J. Duprat, C. Engrand, CNRS Photothèque)

    In all, the team found 808 spherules that had partially melted as they blazed through Earth’s atmosphere and another 1,280 micrometeorites that showed no such damage. The particles ranged in size from 30 to 350 micrometers across and all together weigh mere fractions of a gram. But the micrometeorites were all found within three areas totaling just a few square meters, the merest fraction of Earth’s surface. Assuming that particles of space dust are just as likely to fall in Antarctica as anywhere else allowed the team to estimate how much dust fell over the entire planet.
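
    The scaling behind that estimate is simple arithmetic: divide the collected mass by the sampled area and the years of snow accumulation, then multiply by Earth’s surface area. A back-of-the-envelope sketch, using stand-in values chosen to be consistent with the figures quoted above rather than the study’s exact numbers:

    ```python
    # Back-of-the-envelope version of the extrapolation: scale the mass
    # flux measured in a few square meters of Antarctic snow up to the
    # whole planet. All inputs are assumed stand-in values.
    EARTH_SURFACE_M2 = 5.1e14          # Earth's total surface area, m^2

    mass_collected_g = 8e-4            # assumed: "mere fractions of a gram"
    area_sampled_m2 = 3.0              # assumed: "a few square meters"
    years_of_snow = 25.0               # assumed accumulation time of layers

    flux = mass_collected_g / (area_sampled_m2 * years_of_snow)   # g/m^2/yr
    global_tonnes_per_yr = flux * EARTH_SURFACE_M2 / 1e6          # g -> t

    print(f"{global_tonnes_per_yr:,.0f} metric tons per year")
    # ~5,400 t/yr with these stand-ins, the same order as the reported 5,200
    ```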

    The team’s findings “are a wonderful complement to previous studies,” says Susan Taylor, a geologist at the Cold Regions Research and Engineering Laboratory in Hanover, N.H., who was not involved in the new study. That’s because Duprat and colleagues found a lot of the small stuff that would have dissolved elsewhere, she notes.

    About 80 percent of the micrometeorites originate from comets that spend much of their orbits closer to the sun than Jupiter, the researchers estimate. Much of the rest probably derives from collisions of objects in the asteroid belt. Altogether, these tiny particles deliver somewhere between 20 and 100 metric tons of carbon to Earth each year, Duprat and colleagues suggest, and could have been an important source of carbon-rich compounds such as amino acids early in Earth’s history (SN: 12/4/20).

    in Science News on April 15, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    You want to do what? Paper on anal swabs for COVID-19 retracted for ethical issues

    An article claiming that anal swabs can be used to detect SARS-CoV-2 in patients cured of Covid-19 has been retracted after the journal found that the authors failed to get permission from the patients to conduct the study.  To be clear: We’re not sure if the researchers — from Weihai Municipal Hospital, in Shandong, China … Continue reading You want to do what? Paper on anal swabs for COVID-19 retracted for ethical issues

    in Retraction watch on April 15, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Good Time Management Seems To Have A Bigger Impact On Wellbeing Than Work Performance

    By Emily Reynolds

    As our lives have become busier, the desire to do things quickly and efficiently has grown — something the rise of speed reading apps, the lack of break-taking at work, and a general focus on “productivity” have shown. Good time management skills, therefore, are now highly prized both at work and at home.

    But do such techniques actually work? In a meta-analysis published in PLOS One, Brad Aeon from Concordia University and colleagues find that they do — but perhaps not for the reasons you’d expect. While time management skills have become more important in evaluations of job performance since the 1990s, their biggest impact lies elsewhere: in personal wellbeing.

    Time management, in brief, is a decision-making framework that helps us structure, protect and adapt our time in changing circumstances. It can therefore be measured through questions like “do you have a daily routine?”, “do you find it hard to say no to people?” and “do you evaluate your daily schedule?”. Work-life balance and attitudes to time and time management are also key.

    To explore the efficacy of time management, the team collated 158 papers from the mid-1980s to 2019 in journals in business, computing, gender studies, psychology, sociology and education; papers that included scales or questionnaires on time management were also included. (Interestingly, time management studies became more popular from 2000 through the 2010s, suggesting growing interest in the topic.)

    These studies included work on time management in academia and the workplace, individual differences in time management, and its impact on wellbeing factors such as life satisfaction, anxiety, depression, and positive and negative affect.

    By looking at the effects across all of these studies, the team found that time management has a moderate, positive impact on work performance, both in terms of performance appraisal by managers and factors like motivation and involvement with work. The relationship between time management and job performance became stronger over the years the studies were published, another suggestion that time management has become a more important factor in people’s lives. This link was not as strong in academic settings — time management seemed to be less relevant to test scores or grades than it was to performance reviews at work.

    Most individual differences were only weakly associated with time management skills: women had stronger time management skills than men, for instance, but the correlation was weak. Women’s time management skills did grow over the timeline of the meta-analysis, however, perhaps a sign of busier schedules and increased juggling of tasks.

    Despite narratives that suggest time management is primarily a work or career-based skill, the strongest link was between good time management and wellbeing: the effect of time management on life satisfaction was 72% stronger than on job satisfaction. Time management also reduced feelings of distress.

    Overall the findings suggest that time management does work — though contrary to popular belief, it is wellbeing that is the most positively impacted factor, not work. Work and wellbeing are clearly linked — if you’re having a horrible time at work, your life satisfaction is unlikely to be too high. But the results could mean that wellbeing is not simply a byproduct of a successfully managed work life but can be a direct result of good time management.

    You may not want to put too much stock in time management alone, however: being good at time management, the team argues, is often a function of privilege. Things like income, class and education all influence how we are able to manage our time. They use the meme-ified phrase that “you have as many hours in the day as Beyoncé” to illustrate their point: while technically true, Beyoncé has a full team of nannies, drivers, chefs, personal trainers and more to manage her time, and thus effectively has the hours of their days as well as her own.

    Those without such resources are unlikely to achieve as much as those who have them, and shaming them for lack of attainment is unhelpful. And though time management skills may be invaluable in a busy life, it’s also useful to remember the words of Dr Maria Kordowicz, writing in November’s issue of The Psychologist: you are more than your productivity.

    Does time management work? A meta-analysis

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 15, 2021 09:09 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Only 3 percent of Earth’s land hasn’t been marred by humans

    The Serengeti looks largely like it did hundreds of years ago.

    Lions, hyenas and other top predators still stalk herds of wildebeests over a million strong, preventing them from eating too much vegetation. This diversity of trees and grasses supports scores of other species, from vivid green-orange Fischer’s lovebirds to dung beetles. In turn, such species carry seeds or pollen across the plains, enabling plant reproduction. Humans are there too, but in relatively low densities. Overall, it’s a prime example of what biologists call an ecologically intact ecosystem: a bustling tangle of complex relationships that together sustain a rich diversity of life, undiminished by us.

    Such places are vanishingly rare.

    The vast majority of land on Earth — a staggering 97 percent — no longer qualifies as ecologically intact, according to a sweeping survey of Earth’s ecosystems. Over the last 500 years, too many species have been lost, or their numbers reduced, researchers report April 15 in Frontiers in Forests and Global Change.

    Of the few fully intact ecosystems, only about 11 percent fall within existing protected areas, the researchers found. Much of this pristine habitat exists in northern latitudes, in Canada’s boreal forests or Greenland’s tundra, which aren’t bursting with biodiversity. But chunks of the species-rich rainforests of the Amazon, Congo and Indonesia also remain intact.

    “These are the best of the best, the last places on Earth that haven’t lost a single species that we know of,” says Oscar Venter, a conservation scientist at the University of Northern British Columbia in Prince George who wasn’t involved in the study. Identifying such places is crucial, he says, especially for regions under threat of development that require protection, like the Amazon rainforest.

    Conservation scientists have long tried to map how much of the planet remains undegraded by human activity. Previous estimates using satellite imagery or raw demographic data found anywhere from 20 to 40 percent of the globe was free from obvious human incursions, such as roads, light pollution or the gaping scars of deforestation. But an intact forest canopy can hide an emptied-out ecosystem below.

    “Hunting, the impacts of invasive species, climate change — these can harm ecosystems, but they can’t be easily sensed via satellite,” says conservation biologist Andrew Plumptre of the University of Cambridge. A Serengeti with fewer lions or hyenas — or none at all — may look intact from space, but it’s missing key species that help the whole ecosystem run.

    What exactly constitutes a fully intact and functioning ecosystem is fuzzy and debated by ecologists, but Plumptre and his colleagues started by looking for habitats that retained their full retinue of species, at their natural abundance as of A.D. 1500. That’s the baseline the International Union for the Conservation of Nature uses to assess species extinctions, even though humans have been altering ecosystems by wiping out big mammals for thousands of years (SN: 8/26/15).

    Large swaths of land are necessary to support wide-ranging species. So the researchers initially considered only areas larger than 10,000 square kilometers, roughly the size of Puerto Rico. The team combined existing datasets on habitat intactness with three different assessments of where species have been lost, encompassing about 7,500 animal species. While 28.4 percent of land areas larger than 10,000 square kilometers is relatively free from human disturbance, only 2.9 percent holds all the species it did 500 years ago. Shrinking the minimum size of the area included to 1,000 square kilometers bumps the percentage up, but barely, to 3.4.

    Simply retaining species isn’t enough for ecological intactness, since diminished numbers of key players could throw the system out of whack. The researchers tallied up the population densities of just over a dozen large mammals whose collective ranges span much of the globe, including gorillas, bears and lions. This is a narrow look, Plumptre concedes, but large mammals play important ecological roles. They also have the best historical data and are often the first to be affected by human incursion. Factoring in declines in large mammals only slightly decreased the percentage of ecologically intact land, down to 2.8 percent.
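
    Mechanically, the shrinking percentages come from intersecting ever-stricter screens over the land surface: first habitat intactness, then no recorded species losses, then adequate large-mammal densities. The toy sketch below shows that layering with made-up pass rates, not the study’s data or thresholds.

    ```python
    # Toy illustration of layering ever-stricter screens as boolean
    # masks over land grid cells. Pass rates are invented stand-ins,
    # chosen only to mimic the shrinking percentages in the study.
    import numpy as np

    rng = np.random.default_rng(42)
    n_cells = 100_000                               # hypothetical grid cells

    habitat_intact  = rng.random(n_cells) < 0.284   # habitat screen
    no_species_lost = rng.random(n_cells) < 0.10    # faunal screen (made up)
    mammals_ok      = rng.random(n_cells) < 0.95    # density screen (made up)

    mask = habitat_intact.copy()
    print(f"habitat only:        {mask.mean():.1%}")
    mask &= no_species_lost
    print(f"+ full species list: {mask.mean():.1%}")
    mask &= mammals_ok
    print(f"+ mammal densities:  {mask.mean():.1%}")
    ```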

    Overall the tally of ecologically intact land “was much lower than we were expecting,” says Plumptre. “Going in, I’d guessed that it would be 8 to 10 percent. It just shows how huge an impact we’ve had.”

    Both Venter and Jedediah Brodie, a conservation ecologist at the University of Montana in Missoula, question whether the authors were too strict in their definition of ecological intactness.

    “Many ecosystems around the world have lost one or two species but are still vibrant, diverse communities,” Brodie says. A decline in a few species may not spell disaster for the whole ecosystem, since other species may swoop in to fill those roles.

    Still, the study is a valuable first look that shows us “where the world looks like it did 500 years ago and gives us something to aim for,” Plumptre says. It also identifies areas ripe for restoration. While only 3 percent of land is currently ecologically intact, the introduction of up to five lost species could restore 20 percent of land to its former glory, the researchers calculate. 

    Species reintroductions have worked well in places like Yellowstone National Park, where the restoration of wolves has put the ecosystem back into balance (SN: 7/21/20). Such schemes may not work everywhere. But as the global community discusses how to protect nature over the next decade (SN: 4/22/20), Plumptre hopes this study will prompt policy makers to “not just protect the land that’s there, but also think about restoring it to what it could be.”

    in Science News on April 15, 2021 05:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The P.1 coronavirus variant is twice as transmissible as earlier strains

    The P.1 coronavirus variant first identified in Brazil may be twice as transmissible as earlier strains and may evade up to nearly half of immune defenses built during previous infections, a new study suggests.

    According to data collected in Manaus, Brazil, P.1 probably arose in mid-November 2020 in the city, researchers report April 14 in Science. The variant quickly rose to prominence there and spread to the rest of Brazil and at least 37 other countries, including the United States.

    Earlier examinations of the variant’s genetic makeup have shown that P.1 contains many differences from earlier strains, including 10 amino acid changes in the spike protein, which helps the virus infect cells. Three of those spike protein changes are of concern because they are the same mutations that allow other worrisome variants to bind more tightly to human proteins or to evade antibodies (SN: 2/5/21). Simulations of P.1’s properties suggest that the variant is 1.7 to 2.4 times more transmissible than the previous SARS-CoV-2 strain. It is not clear whether that increase in transmissibility is because people produce more of the virus or have longer infections.

    Some studies have hinted that people who previously had COVID-19 can get infected with P.1. The new study suggests that people who had earlier infections have only about 54 to 79 percent of the protection against P.1 that they have against other local strains. That partial immunity may leave people vulnerable to reinfection with the variant.

    Whether the virus makes people sicker or is more deadly than other strains is not clear. The researchers estimate that coronavirus infections were 1.2 to 1.9 times more likely to result in death after P.1 emerged than before. But Manaus’ health care system has been under strain, so the increase in deaths may be due to overburdened hospitals.

    in Science News on April 14, 2021 10:57 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    ‘Monkeydactyl’ may be the oldest known creature with opposable thumbs

    Future Jurassic Park films could feature one weird new beast in the menagerie: a pterosaur nicknamed Monkeydactyl for its opposable thumbs.

    This flying reptile from the Jurassic Period may be the earliest known animal that could touch the insides of its thumbs to the insides of its other fingers, researchers report online April 12 in Current Biology. Such dexterity probably allowed Monkeydactyl to climb trees about 160 million years ago, perhaps to feed on insects and other prey that nonclimbing pterosaurs did not, the researchers say (SN: 12/21/18). The latter half of the creature’s official name, Kunpengopterus antipollicatus, comes from the words “opposite” and “thumb” in ancient Greek.

    Monkeydactyl’s fossilized remains, unearthed in northeastern China in 2019, are embedded in rock. So the team used micro-CT scanning to create a 3-D rendering of the fossil. “With this detail, we’re able to look at the fossil from any angle, and make sure that the bones are in their right [original] place,” says study coauthor Rodrigo Pêgas, a paleontologist at the Federal University of ABC in São Bernardo do Campo, Brazil.

    Monkeydactyl’s fossilized hand shows the opposable thumb (pictured, topmost finger) facing the opposite direction of its other clawed fingers. (Image: X. Zhou et al/Current Biology 2021)

    Those scans helped confirm that the skeleton had a well-preserved opposable thumb on each hand. “Almost all of the modern animals that have opposable thumbs use them to climb trees,” Pêgas says, including primates and some tree frogs. That evidence, along with the apparent flexibility of Monkeydactyl’s joints, suggests this species was well suited to clambering through tree branches.

    in Science News on April 14, 2021 02:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A coronavirus epidemic may have hit East Asia about 25,000 years ago

    An ancient coronavirus, or a closely related pathogen, triggered an epidemic among ancestors of present-day East Asians roughly 25,000 years ago, a new study indicates.

    Analysis of DNA from more than 2,000 people shows that genetic changes in response to that persistent epidemic accumulated over the next 20,000 years or so, David Enard, an evolutionary geneticist at the University of Arizona in Tucson, reported April 8 at the virtual annual meeting of the American Association of Physical Anthropologists. The finding raises the possibility that some East Asians today have inherited biological adaptations to coronaviruses or closely related viruses.

    The discovery opens the way to exploring how genes linked to ancient viral epidemics may contribute to modern disease outbreaks, such as the COVID-19 pandemic. Genes with ancient viral histories might also provide clues to researchers searching for better antiviral drugs, although that remains to be demonstrated.

    Enard’s group consulted a publicly available DNA database of 2,504 individuals from 26 ethnic populations on five continents, including Chinese Dai, Vietnamese Kinh and African Yoruba people. The team first focused on 420 proteins known to interact with coronaviruses, including 332 that interact with SARS-CoV-2, the virus that causes COVID-19. These interactions could range from boosting immune responses to making it easier for a virus to hijack a cell.

    Substantially increased production of all 420 proteins, a sign of past exposures to coronavirus-like epidemics, appeared only in East Asians. Enard’s group traced the viral responses of 42 of those proteins back to roughly 25,000 years ago.

    An analysis of the genes known to orchestrate production of those proteins determined that specific variants became more common around 25,000 years ago before leveling off in frequency by around 5,000 years ago. That pattern is consistent with an initially vigorous genetic response to a virus that waned over time, either as East Asians adapted to the virus or as the virus lost its ability to cause disease, Enard said. Twenty-one of the 42 gene variants act either to enhance or deter the effects of a wide array of viruses, not just coronaviruses, suggesting that an unknown virus that happened to exploit similar proteins as coronaviruses could have instigated the ancient epidemic, Enard said.

    These findings “show that East Asians have been exposed to coronavirus-like epidemics for a long time and are more [genetically] adapted to epidemics of these viruses,” says evolutionary geneticist Lluis Quintana-Murci of the Pasteur Institute in Paris, who was not involved in the new study.

    It’s possible that DNA adjustments to coronavirus epidemics over many thousands of years may contribute to lower COVID-19 infection and death rates reported in East Asian nations, versus European countries and the United States, Quintana-Murci speculates. But it’s unknown at this point what, if any, effect those DNA tweaks could have. Many factors, including jobs that can’t be done remotely and lack of health care access, drive COVID-19 infections, he says (SN: 11/11/20; SN: 7/2/20). And social factors, such as quick, strict lockdowns and widespread mask wearing, may have deterred infections in some East Asian nations.

    Large-scale genetic studies in modern East Asians and probes of ancient human DNA spanning the past 25,000 years are needed to explore how the 42 identified gene variants may contribute to COVID-19 or other coronavirus infections. Those variants may also present opportunities for developing COVID-19 treatments, Enard said. So far, though, only four of those genes are targets of 11 drugs being used or investigated in studies of COVID-19 treatments, he said.

    Enard’s findings follow related evidence that a set of inherited Neandertal gene variants raise the risk of developing severe COVID-19 in some South Asians and Europeans, while others may provide some level of protection (SN: 10/2/20; SN: 2/17/21).

    in Science News on April 14, 2021 02:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Organization of Cell Assemblies in the Hippocampus

    This week on Journal Club session Emil Dmitruk will talk about a paper "Organization of Cell Assemblies in the Hippocampus" and will briefly present how this subject can be approached with topological data analysis.


    Neurons can produce action potentials with high temporal precision. A fundamental issue is whether, and how, this capability is used in information processing. According to the "cell assembly" hypothesis, transient synchrony of anatomically distributed groups of neurons underlies processing of both external sensory input and internal cognitive mechanisms. Accordingly, neuron populations should be arranged into groups whose synchrony exceeds that predicted by common modulation by sensory input. Here we find that the spike times of hippocampal pyramidal cells can be predicted more accurately by using the spike times of simultaneously recorded neurons in addition to the animal's location in space. This improvement remained when the spatial prediction was refined with a spatially dependent theta phase modulation. The time window in which spike times are best predicted from simultaneous peer activity is 10–30 ms, suggesting that cell assemblies are synchronized at this timescale. Because this temporal window matches the membrane time constant of pyramidal neurons, the period of the hippocampal gamma oscillation, and the time window for synaptic plasticity, we propose that cooperative activity at this timescale is optimal for information transmission and storage in cortical circuits.
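
    The core analysis can be pictured as peer prediction: regress each neuron's spike train on the smoothed activity of simultaneously recorded peers and ask which smoothing window predicts best. The sketch below is a loose, synthetic-data illustration of that idea, not the paper's actual method (which predicted spike times and folded in the animal's position); with the shared latent rate used here, intermediate windows should explain the most variance, echoing the paper's 10–30 ms optimum.

    ```python
    # Loose sketch of "peer prediction" on synthetic data (not the
    # paper's method): regress one neuron's spike counts on the
    # smoothed activity of its peers, for several smoothing windows.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T, n_peers = 0.001, 60.0, 20          # 1 ms bins, 60 s, 20 peers
    n_bins = int(T / dt)

    # Synthetic assembly: peers and target share a slow latent rate.
    latent = np.convolve(rng.normal(size=n_bins), np.ones(25) / 25, mode="same")
    gains = rng.uniform(0.5, 1.5, (n_peers, 1))
    peers = rng.poisson(np.clip(5 + 40 * gains * latent[None, :], 0, None) * dt)
    target = rng.poisson(np.clip(2 + 30 * latent, 0, None) * dt)

    def prediction_quality(window_ms):
        """Variance in the target explained by peers smoothed with a
        boxcar of the given width (linear least-squares regression)."""
        w = max(1, int(window_ms / 1000 / dt))
        kernel = np.ones(w) / w
        X = np.array([np.convolve(p, kernel, mode="same") for p in peers]).T
        X = np.column_stack([X, np.ones(n_bins)])   # add intercept
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        return 1 - (target - X @ coef).var() / target.var()

    for ms in (1, 5, 10, 30, 100, 300):
        print(f"{ms:4d} ms window: R^2 = {prediction_quality(ms):.3f}")
    ```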


    Papers:

    Date: 2021/04/16
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 14, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    ‘First Steps’ shows how bipedalism led humans down a strange evolutionary path

    First Steps
    Jeremy DeSilva
    Harper, $27.99

    No other animal moves the way we do. That’s awfully strange. Even among other two-legged species, none amble about with a straight back and a gait that, technically, is just a form of controlled falling. Our bipedalism doesn’t just set us apart, paleoanthropologist Jeremy DeSilva posits; it’s what makes us human.

    There’s no shortage of books that propose this or that feature — tool use or self-awareness, for example — as the very definition of humankind. But much of our supposed uniqueness doesn’t stand up to scrutiny. In First Steps, DeSilva takes a slightly different approach. Our way of walking, he argues, set off an array of consequences that inform our peculiar evolutionary history.

    DeSilva starts his tour through the annals of bipedalism with other upright organisms. Tyrannosaurus and ancient crocodile relatives are trotted out to show how they moved on two legs, thanks to long, counterbalancing tails (SN: 6/12/20). DeSilva stumbles a little here, as when he argues that “bipedalism was not a successful locomotion for many dinosaur lineages.” An entire group — the theropods — walked on two legs and still do in their avian guises. But the comparison with dinosaurs is still worthwhile. With no tail, the way we walk is even stranger. “Let’s face it,” DeSilva writes, “humans are weird.”

    Each following chapter gets more surefooted as DeSilva guides readers through what we’ve come to know about how our ancestors came to be bipedal. This is breezy popular science at its best, interweaving anecdotes from the field and lab with scientific findings and the occasional pop culture reference. DeSilva gets extra credit for naming oft-overlooked experts who made key discoveries.

    Instead of presenting a march of progress toward ever-greater bipedal perfection, DeSilva highlights how our ancestors had varied forms of upright walking, such as the somewhat knock-kneed gait of Australopithecus sediba (SN: 7/25/13). The way we now walk, he argues, was one evolutionary pathway among many possibilities.

    But walking upright opened up unique evolutionary avenues, DeSilva notes. Freed from locomotion, our arms and hands could become defter at creating and manipulating tools. Our ancestors also evolved a bowl-shaped pelvis to comfortably cradle our viscera. But this arrangement made giving birth more complicated, especially as human infants began to have larger heads that needed to pass through a narrowed birth canal created by this anatomical shift. Such trade-offs, including how debilitating twisted ankles and broken bones can be to humans, may have required our ancestors to care for each other, DeSilva concludes. While that may be a step too far into speculation, he nevertheless makes a compelling case overall. “Our bipedal locomotion was a gateway to many of the unique traits that make us human,” he writes, an evolutionary happenstance that formed the context for how we came to be.


    Buy First Steps from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

    in Science News on April 14, 2021 01:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    STEM’s racial, ethnic and gender gaps are still strikingly large

    Efforts to promote equity and inclusion in science, technology, engineering and math have a long way to go, a new report suggests.

    Over the last year, widespread protests in response to the police killings of George Floyd, Breonna Taylor and other unarmed Black people have sparked calls for racial justice in STEM. Social media movements such as #BlackinSTEM have drawn attention to discrimination faced by Black students and professionals, and the Strike for Black Lives challenged the scientific community to build a more just, antiracist research environment (SN: 12/16/20).

    An analysis released in early April of federal education and employment data from recent years highlights how wide the racial, ethnic and gender gaps in STEM representation are. “This has been an ongoing conversation in the science community” for decades, says Cary Funk, the director of science and society research at the Pew Research Center in Washington, D.C. Because the most recent data come from 2019, Pew’s snapshot of STEM cannot reveal how recent calls for diversity, equity and inclusion may have moved the needle. But here are four big takeaways from existing STEM representation data: 

    Black and Hispanic workers remain underrepresented in STEM jobs.

    From 2017 to 2019, Black professionals made up only 9 percent of STEM workers in the United States — lower than their 11 percent share of the overall U.S. workforce. The representation gap was even larger for Hispanic professionals, who made up only 8 percent of people working in STEM, while they made up 17 percent of the total U.S. workforce. White and Asian professionals, meanwhile, remain overrepresented in STEM.

    Some STEM occupations, such as engineers and architects, skew particularly white. But even fields that include more professionals from marginalized backgrounds do not necessarily boast more supportive environments, notes Jessica Esquivel, a particle physicist at Fermilab in Batavia, Ill., not involved in the research.

    For instance, Black professionals are represented in health care jobs at the same level as they are in the overall workforce, according to the Pew report. But many white people with medical training continue to believe racist medical myths, such as the idea that Black people have thicker skin or feel less pain than white people, reports a 2016 study in the Proceedings of the National Academy of Sciences.

    Current diversity in STEM education mirrors gaps in workforce representation.

    Black and Hispanic students are less likely to earn degrees in STEM than in other fields. For instance, Black students earned 7 percent of bachelor’s degrees in STEM in 2018 (the most recent year with available data) — lower than their 10 percent share of all bachelor’s degrees that year. White and Asian students, on the other hand, are overrepresented among STEM college graduates.

    Black and Hispanic students are also underrepresented among those earning advanced STEM degrees. Since these education stats are similar to employment stats, the study authors see no major shifts in workplace representation in the near future.

    Representation of women varies widely across STEM fields.

    Women make up about half of STEM professionals in the United States — slightly more than their 47 percent share of the overall workforce. From 2017 to 2019, they constituted nearly three-quarters of all health care workers, but were outnumbered by men in the physical sciences, computing and engineering.

    STEM education data do not foreshadow major changes in women’s representation: Women earned a whopping 85 percent of bachelor’s degrees in health-related fields, but a mere 22 percent in engineering and 19 percent in computer science as of 2018.

    There are large pay gaps among STEM workers by gender, race and ethnicity.

    The typical salary from 2017 to 2019 for a woman in STEM was about 74 percent of the typical man’s salary in STEM. That ratio is up from 72 percent in 2016, but the gap was still wider than in the overall workforce, where women earned about 80 percent of what men did.

    Racial and ethnic disparities in STEM pay, on the other hand, widened. Black STEM professionals typically earned about 78 percent of white workers’ earnings from 2017 to 2019 — down from 81 percent in 2016. And typical pay for Hispanic professionals in STEM was 83 percent of white workers’ earnings — down from 85 percent in 2016. Meanwhile, Asian STEM professionals’ typical earnings rose from 125 percent of white workers’ pay to 127 percent.

    Looking ahead

    The new Pew results are important but not surprising, says Cato Laurencin, a surgeon and engineer at the University of Connecticut in Farmington. “Why the numbers are where they are, I think, is maybe an even more important discussion.”

    The barriers to entering STEM “are very, very different with every group,” says Laurencin, who chairs the National Academy of Sciences, Engineering and Medicine Roundtable on Black Men and Black Women in Science, Engineering and Medicine. In particular, he says, “Blacks working their way through STEM education and STEM professions really face a gauntlet of adversity.” That runs the gamut from fewer potential STEM role models in school to workplace discrimination (SN: 12/16/20).

    Esquivel, a cofounder of #BlackinPhysics, is optimistic about change. Over the last year, “we’ve realized the power of our voice, and I see us not going back because of that — because we’ve started grassroots movements, like #BlackinPhysics, like all of the #BlackinX networks that popped off this past June,” she says. “These early-career, student-led grassroots movements are keeping the people-in-power’s feet to the fire, and just not backing down. That really does give me hope for the future.”

    in Science News on April 14, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    University in Japan revokes doctorate for plagiarism of text, image

    A researcher in Japan has been stripped of his doctorate after a university investigation found that his thesis contained seven lines of plagiarized text and an image pulled from the internet without attribution. Takuma Hara received his PhD in medical sciences from Tsukuba University in March 2019, writing a thesis about a genetic mutation’s role … Continue reading University in Japan revokes doctorate for plagiarism of text, image

    in Retraction watch on April 14, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Taking stock of the future of work, mid-pandemic

    This past month marked an anniversary like no other. On 11 March 2020, the World Health Organization declared COVID-19 a pandemic and with it, normal life of eating out, commuting to work, and seeing grandparents came to a sudden halt. One year later, my new book about the intersection of psychology and the workplace was published. With wide-scale vaccinations on the rise, I thought it would be a good time to take stock of where we are and just how much has changed.

    Even before the pandemic, major shifts in how employees think about and interact with their workplace were well underway. Routine tasks were increasingly handed over to automation, such that workers could concentrate on more complex or valuable activities. Employees were being shuffled into new environments guided by activity-based working, with neighborhoods and open-plan floor plans replacing traditional cubicles. Gig working was on the rise and self-employed side hustles were becoming the norm.

    COVID-19 added fuel to the fire. During the pandemic’s peak, approximately 30% of employees were working from home, while internet use surged by up to 30% over pre-pandemic levels as workers competed for bandwidth with their kids, who were now thrown into distance learning. Social collaboration tools, like Zoom and Teams, found the footing that they were searching for and quickly became the default means of working with peers and customers.

    Even though employers made swift and effective adjustments, the pandemic unearthed social inequalities that were bubbling under the surface. Whereas over half of information workers in the US (e.g. consultants or managers) had the luxury of staying safe by working from home, only 1 in 20 service workers could do the same. These workers were tied to their physical workplaces, making cars, cooking food, or cleaning hospitals, and were disproportionately people of color. They bore the brunt of the sudden halt to normal life, having both to put themselves at risk and to get by without the tools to work in any other way.

    With sustainable progress towards an end to the pandemic in sight, employers have already made decisions that will have lasting effects on the workplace. Remote working has proven both feasible and effective for even the most skeptical companies, resulting in many employers encouraging hybrid working. Exactly what this means is still up in the air, even for Big Technology. Twitter and Spotify have vowed to make remote working indefinite, while Google requires employees who plan to spend more than 14 days at home annually to apply for special permission. Somewhere in between are Facebook, Salesforce, and Amazon who have all stated support for a hybrid model.

    The offices that employees are returning to will also look and feel different. This past year has given employers the rare opportunity to reassess their real estate portfolio. For example, companies like Target have recently announced a reduction of their office footprint, anticipating lower utilization due to hybrid working. When they do come in, workers will be there to collaborate with peers, access tools and technology, or simply meet up as a team. What is increasingly off the menu is the solitary, heads-down type of work that was typically done in a cubicle.

    “As we enter this new Future of Work, the same psychological tendencies that have always governed our work behavior will undergo a period of adjustment.”

    To date, much of the shift to the Future of Work has been unidimensional, tackling the physical workplace alone. What lags behind is a new set of behaviors and social norms to guide employees on how and when they should interact with their new workplace.

    For example, video conferencing breaks down the separation between home and work life. Catching a glimpse of a curious four-year-old popping up on screen or spotting laundry that has yet to be put away creates unintended and possibly inappropriate intimacy. Meanwhile, choices about when to turn on the camera and which hours are truly off-limits have yet to fully work themselves out.

    Just like the tangible office, effective remote working will be made possible by a range of behaviors and social norms that interplay with the motivations, needs, and desires of employees. Wearing a suit while on camera not only looks odd but also plays against our desire to be comfortable at home. There is a competing interest between looking professional and feeling comfortable, with the answer resting somewhere between formalwear and pajamas.

    As we enter this new Future of Work, the same psychological tendencies that have always governed our work behavior will undergo a period of adjustment. Drivers like identity, reward, and obedience are all still in play, but will take on a distinct flavor as we migrate to a new way of working. Building rapport on Zoom is different from sitting down for a meal, just as observing job performance in person relies on a degree of constant visibility that goes missing with remote work. Following the pandemic, there is little doubt that the workplace will fundamentally change; the question is how and when the psychology of workers will catch up.

    Read a free chapter on “Connection” from Punching the Clock, freely available until 13 May 2021.


    Featured image by piranka

    The post Taking stock of the future of work, mid-pandemic appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on April 14, 2021 09:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    People Who Identify With Humanity As A Whole Are More Likely To Say They’d Follow Pandemic Guidelines And Help Others

    By Emily Reynolds

    The ever-changing public health measures rolled out during the coronavirus pandemic haven’t always been crystal clear. But several instructions have remained the same throughout: wear a mask, wash your hands, and stay two metres apart.

    Despite the strength and frequency of this messaging, however, the public hasn’t always complied. Though the exact reason for this non-compliance is clearly complex, researchers from the University of Washington have proposed one factor that could influence people’s behaviour: the extent to which they identify with other human beings. Writing in PLOS One, they suggest that a connection with and moral commitment to other humans may be linked to greater willingness to follow COVID-related guidelines.

    Participants were 2,537 adults from across the world, based in countries in North and South America, Europe and Asia. First, they were asked how likely they were to comply with the health behaviours recommended by the World Health Organization (WHO) at the start of 2020, when the pandemic began to spread across the world. The four key behaviours the WHO recommended were thoroughly washing hands, covering the mouth when coughing or sneezing, social distancing, and not touching the face.

    They were then asked how likely they were to engage in four prosocial behaviours: donating masks to a hospital, picking up someone with COVID-19 from the side of the road, going grocery shopping for a family that needed food despite stay-at-home guidelines, and calling an ambulance for someone afflicted with the virus.

    Participants then indicated how strict their country or area’s lockdown was at the time, how available tests were in their area, and how much they perceived themselves at personal risk of contracting the virus. They also reported how much they identified with their own community and their own nation, and finally how much they identified with all humanity (e.g. responding to statements like “how much do you believe in being loyal to all humanity?”).

    Those who identified more strongly with all of humanity were significantly more likely to say they’d engage both in prosocial behaviours, such as donating masks and going grocery shopping for others, and in the WHO health behaviours designed to stop the spread of the virus. Other factors also had an impact on prosocial behaviours — identification with one’s community was strongly linked with several outcomes, for instance. But identification with all of humanity was the only variable that significantly predicted all five outcomes, and had a much larger effect than any of the other variables.

    The results are positive news for those who reject nationalism — participants who identified more closely both with their own community and with humanity as a whole were more likely to engage in prosocial and health behaviours than those who strongly identified with their nation alone. As the team puts it, the strongest prosocial behaviour is seen in those who feel more connected to “the larger family of humanity” than to a specific nation.

    But how, exactly, does identification with humanity develop? Further research could look at the genesis of such ideas, as well as looking at a wider demographic sample (72% of this sample were university educated, for example). Ascertaining whether certain personality types or traits are linked to such beliefs could also be illuminating.

    It’s also unclear how far these findings go in explaining not just non-compliance but outright rejection of guidelines. Finding out what makes anti-lockdown groups tick may also be necessary when it comes to increasing uptake of public health measures.

    Identifying with all humanity predicts cooperative health behaviors and helpful responding during COVID-19

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 14, 2021 08:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Stefano Mancuso’s Nation of Plants: review of two books

    Stefano Mancuso studies the neuroscience of plants. I review two of his recent popular science books: "The Incredible Journey of Plants" and "The Nation of Plants".

    in For Better Science on April 14, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Operational Response to US Measles Outbreaks, 2017-19

    Despite widespread coverage with a highly efficacious vaccine, pockets of undervaccinated populations and imported cases can result in sizable measles outbreaks. And even though measles was declared eliminated in the United States more than 20 years ago, the country has faced a series of large measles outbreaks in the years since, particularly over the past decade.

    In our study just published in BMC Public Health, we conducted interviews with individuals from state and local health departments and health systems from across the country who responded to measles outbreaks in 2017-19, representing outbreaks that account for more than 75% of all US measles cases over that period. Our principal aim was to capture firsthand operational experiences in order to generate evidence-based lessons that can inform preparedness activities for future outbreaks of measles and other highly transmissible diseases.

    This study took place under the umbrella of a larger initiative, Outbreak Observatory, which seeks to address the operational challenges and barriers faced during outbreak responses and disseminate lessons to facilitate future preparedness and response efforts. Outbreak responses tend to be busy times that do not lend themselves to documenting operational knowledge acquired during the response. Outbreak Observatory aims to provide a forum to aggregate those lessons so that other jurisdictions can put them into practice without having to learn them firsthand. Previous Outbreak Observatory studies—including on the 2017-18 US hepatitis A epidemic and the impact of Candida auris and the 2017-18 influenza season on US health systems—identified lessons that reach beyond those individual responses to inform broader outbreak and epidemic preparedness. As highlighted by the complexities of the COVID-19 pandemic response, it is critical to share the wealth of operational experience held by frontline responders to improve outbreak response readiness and capacity in advance of future communicable disease events.

    Even for the smaller outbreaks, health departments faced considerable challenges conducting routine outbreak response activities, such as contact tracing

    The participants in our study on US measles outbreaks called attention to a number of resource and operational gaps during their response operations that apply directly to the COVID-19 pandemic. While some of these outbreaks were quite large, none even remotely compared to the scale of the US COVID-19 epidemic. Even for the smaller outbreaks, health departments faced considerable challenges conducting routine outbreak response activities, such as contact tracing. These gaps became front-page news with COVID-19, as health departments around the country struggled to implement effective testing, contact tracing, and other surveillance operations during the pandemic. Additionally, most health departments interviewed in this study indicated that they did not have the resources available to conduct mass vaccination operations in response to the outbreak. Rather, they relied heavily on healthcare providers and pharmacists in the community to administer the vaccinations.

    Public health preparedness funding has decreased steadily over the past decade or longer, and health departments are unable to maintain the programs and personnel required to respond to even minor outbreaks, let alone major epidemics or pandemics.

    As we have observed during the COVID-19 response—first with mass testing clinics and currently with vaccinations—health departments need considerable external resources to stand up large-scale responses for major epidemics. Public health preparedness funding has decreased steadily over the past decade or longer, and health departments are unable to maintain the programs and personnel required to respond to even minor outbreaks, let alone major epidemics or pandemics.

    As with the outbreaks included in this study, many of which occurred largely in insular communities—e.g., racial/ethnic minorities, immigrants, orthodox religious communities—COVID-19 has disproportionately impacted vulnerable racial and ethnic minority communities. Public health and healthcare organizations faced significant barriers to encouraging vaccination among the affected communities, and we are currently observing similar challenges as the availability of SARS-CoV-2 vaccines scales up and eligibility groups expand. During the measles outbreaks, health officials often relied on community healthcare providers, particularly those that serve vulnerable and insular communities, to engage with the affected community. Additionally, trusted community leaders—including religious leaders, business owners, and community-based organizations—played a critical role in disseminating accurate information regarding protective measures, including vaccination. It requires substantial time and resources to develop these relationships, however, and it can be very difficult to do so in the midst of an outbreak or epidemic response. Health departments that had previously established relationships (e.g., from previous outbreaks) found it easier to leverage these community leaders to make a positive impact.

    Historically, most outbreak research focuses on the disease epidemiology or clinical care or is limited to after-action reports that focus on organizational challenges and may never be published publicly

    Historically, most outbreak research focuses on the disease epidemiology or clinical care or is limited to after-action reports that focus on organizational challenges and may never be published publicly. It is critical to understand the operational experiences of frontline responders, including public health and healthcare organizations, and to translate those experiences into evidence-based lessons that can inform other preparedness efforts. Without these lessons, jurisdictions repeat the same mistakes and must learn them firsthand. And the challenges, barriers, and shortcomings identified during smaller outbreaks will only be exacerbated during larger events.

    Efforts to document and disseminate these operational experiences, like Outbreak Observatory and others, support the development of programs and policies that can enable sustainable public health preparedness capacity that is needed for a range of communicable disease events, from smaller outbreaks to larger health emergencies like COVID-19.

    The post Operational Response to US Measles Outbreaks, 2017-19 appeared first on BMC Series blog.

    in BMC Series blog on April 13, 2021 01:37 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    One in six of the papers you cite in a review has been retracted. What do you do?

    The author of a 2014 review article about the role of vitamin D in Parkinson’s disease has alerted readers to the fact that roughly one-sixth of her references have since been retracted. But she and the journal are not retracting the review itself. The paper, “A review of vitamin D and Parkinson’s disease,” appeared in …

    in Retraction watch on April 13, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Episode 24: How Children Learn Through Play

    This is Episode 24 of PsychCrunch, the podcast from the British Psychological Society’s Research Digest, sponsored by Routledge Psychology. Download here.

    What role does play have in child development? In this episode, our presenter Ginny Smith talks to some top play researchers to find out how children learn new skills and concepts through play, and explores what teachers and parents can do to encourage this kind of learning. Ginny also discovers how the Covid-19 pandemic has changed the way kids play and learn.

    Our guests, in order of appearance, are Professor Marilyn Fleer and Dr Prabhat Rai from Monash University, and Dr Suzanne Egan from the University of Limerick.

    Subscribe and download via iTunes.
    Subscribe and download via Stitcher.
    Subscribe and listen on Spotify.

    Episode credits: Presented and produced by Ginny Smith. Script edits by Matthew Warren. Mixing and editing by Jeff Knowler. PsychCrunch theme music by Catherine Loveday and Jeff Knowler. Art work by Tim Grimshaw.

    Background reading for this episode

    The Research Digest and The Psychologist archives have plenty on play, including:

    An over-abundance of toys may stifle toddler creativity
    “Being Fun” Is An Important Marker Of Social Status Among Children
    Fantasy-based pretend play is beneficial to children’s mental abilities
    Mother-toddler play-time is more interactive and educational with old-fashioned toys
    Learning in unexpected places: Elian Fink and Jenny Gibson on the importance of play in early childhood.
    Child play a priority after lockdown: Ella Rhodes reports on an open letter.
    A golden age of play for adults: Dave Neale on a growing yet under-explored area.

    Our sponsors Routledge Psychology are giving PsychCrunch listeners the chance to discover even more ground-breaking research: free access to 5 articles of your choosing from over 4.5 million at tandfonline.com, plus a 20% discount on books at routledge.com.

    Past PsychCrunch episodes:

    Episode one: Dating and Attraction
    Episode two: Breaking Bad Habits
    Episode three: How to Win an Argument
    Episode four: The Psychology of Gift Giving
    Episode five: How To Learn a New Language
    Episode six: How To Be Sarcastic 
    Episode seven: Use Psychology To Compete Like an Olympian.
    Episode eight: Can We Trust Psychological Studies?
    Episode nine: How To Get The Best From Your Team
    Episode ten: How To Stop Procrastinating
    Episode eleven: How to Get a Good Night’s Sleep
    Episode twelve: How To Be Funnier
    Episode thirteen: How to Study and Learn More Effectively
    Episode fourteen: Psychological Tricks To Make Your Cooking Taste Better
    Episode fifteen: Is Mindfulness A Panacea Or Overhyped And Potentially Problematic?
    Bonus episode (sixteen): What’s It Like To Have No Mind’s Eye?
    Episode seventeen: How To Make Running Less Painful And More Fun
    Episode eighteen: How To Boost Your Creativity
    Episode nineteen: Should We Worry About Screen Time?
    Episode twenty: How to cope with pain
    Episode twenty-one: How To Stay Connected In The “New Normal”
    Episode twenty-two: Drifting Minds — Maladaptive Daydreaming And The Hypnagogic State
    Episode twenty-three: Whose Psychology Is It Anyway? Making Psychological Research More Representative 

    in The British Psychological Society - Research Digest on April 13, 2021 07:51 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The intellectual leaps of Joe Shapiro

    Charcoal as COVID-19 therapy? It may sound silly, but there is a solid history of data fudging behind it, wandering western blots included!

    in For Better Science on April 13, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The amoral nonsense of Orchid’s embryo selection

    If you haven’t heard about Clubhouse yet… well, it’s the latest Silicon Valley unicorn, and the popular new chat hole for thought leaders. I heard about it for the first time a few months ago, and was kindly offered an invitation (Clubhouse is invitation only!) so I could explore what it is all about. Clubhouse is an app for audio-based social networking, and the content is, as far as I can tell, a mixed bag. I’ve listened to a handful of conversations hosted on the app… topics include everything from bitcoin to Miami. It was interesting, at times, to hear the thoughts and opinions of some of the discussants. On the other hand, there is a lot of superficial rambling on Clubhouse as well. During a conversation about genetics I heard someone posit that biology has a lot to learn from the fashion industry. This was delivered in a “you are hearing something profound” manner, by someone who clearly knew nothing about either biology or the fashion industry, which is really too bad, because the fashion industry is quite interesting and I wouldn’t be surprised at all if biology has something to learn from it. Unfortunately, I never learned what that is.

    One of the regulars on Clubhouse is Noor Siddiqui. You may not have heard of her; in fact she is officially “not notable”. That is to say, she used to have a Wikipedia page but it was deleted on the grounds that there is nothing about her that indicates notability, which is of course notable in and of itself… a paradox that says more about Wikipedia’s gatekeeping than Siddiqui (Russell 1903, Litt 2021). In any case, Siddiqui was recently part of a Clubhouse conversation on “convergence of genomics and reproductive technology” together with Carlos Bustamante (advisor to cryptocurrency-based Luna DNA and soon to be professor of business technology at the University of Miami) and Balaji Srinivasan (bitcoin angel investor and entrepreneur). As it happens, Siddiqui is the CEO of a startup called “Orchid Health”, in the genomics and reproductive technology “space”. The company promises to harness “population genetics, statistical modeling, reproductive technologies, and the latest advances in genomic science” to “give parents the option to lower a future child’s genetic risk by creating embryos through IVF and implanting embryos in the order that can reduce disease risk.” This “product” will be available later this year. Bustamante and Srinivasan are early “operators and investors” in the venture.

    Orchid is not Siddiqui’s first startup. While she doesn’t have a Wikipedia page, she does have a website where she boasts of having (briefly) been a Thiel fellow and, together with her sister, starting a company as a teenager. The idea of the (briefly in existence) startup was apparently to help the now commercially defunct Google Glass gain acceptance by bringing the device to the medical industry. According to Siddiqui, Orchid is also not her first dive into statistical modeling or genomics. She notes on her website that she did “AI and genomics research”, specifically on “deep learning for genomics”. Such training and experience could have been put to good use but…

    Polygenic risk scores and polygenic embryo selection

    Orchid Health claims that it will “safely and naturally, protect your baby from diseases that run in your family” (the slogan “have healthy babies” is prominently displayed on the company’s website). The way it will do this is to utilize “advances in machine learning and artificial intelligence” to screen embryos created through in-vitro fertilization (IVF) for “breast cancer, prostate cancer, heart disease, atrial fibrillation, stroke, type 2 diabetes, type 1 diabetes, inflammatory bowel disease, schizophrenia and Alzheimer’s”. What this means in (a statistical geneticist’s) layman’s terms is that Orchid is planning to use polygenic risk scores derived from genome-wide association studies to perform polygenic embryo selection for complex diseases. This can be easily unpacked because it’s quite a simple proposition, although it’s far from a trivial one: the statistical genetics involved is deep and complicated.

    First, a single-gene disorder is a health problem that is caused by a single mutation in the genome. Examples of such disorders include Tay-Sachs disease, sickle cell anemia, Huntington’s disease, Duchenne muscular dystrophy, and many other diseases. A “complex disease”, also called a multifactorial disease, is a disease that has a genetic component, but one that involves multiple genes, i.e. it is not a single-gene disorder. Crucially, complex diseases may involve effects of environmental factors, whose role in causing disease may depend on the genetic composition of an individual. The diseases on Orchid’s website, including breast cancer, prostate cancer, heart disease, atrial fibrillation, stroke, type 2 diabetes, type 1 diabetes, inflammatory bowel disease, schizophrenia and Alzheimer’s disease, are all examples of complex (multifactorial) diseases.

    To identify genes that associate with a complex disease, researchers perform genome-wide association studies (GWAS). In such studies, researchers typically analyze several million genomic sites in large numbers of individuals with and without a disease (this used to be thousands of individuals; nowadays it is hundreds of thousands or millions) and perform regressions to assess the marginal effect at each locus. I italicized the word associate above, because genome-wide association studies do not, in and of themselves, point to genomic loci that cause disease. Rather, they produce, as output, lists of genomic loci that have varying degrees of association with the disease or trait of interest.

    Polygenic risk scores (PRS), which the Broad Institute claims to have discovered (narrator: they were not discovered at the Broad Institute), are a way to combine the multiple genetic loci associated with a complex disease from a GWAS. Specifically, a PRS \hat{S} for a complex disease is given by

    \hat{S} = \sum_{j=1}^m X_j \hat{\beta}_j,

    where the sum is over m different genetic loci, the X_j are coded genetic markers for an individual at the m loci, and the \hat{\beta}_j are weights based on the marginal effects derived from a GWAS. The concept of a PRS is straightforward, but the details are complicated, in some cases subtle, and generally non-trivial. There is debate over how many genomic loci should be used in computing a polygenic risk score given that the vast majority of marginal effects are very close to zero (Janssens 2019), lots of ongoing research about how to set the weights to account for issues such as bias caused by linkage disequilibrium (Vilhjálmsson et al. 2015, Shin et al. 2017, Newcombe et al. 2019, Ge et al. 2019, Lloyd-Jones et al. 2019, Pattee and Pan 2020, Song et al. 2020), and continuing discussions about the ethics of using polygenic risk scores in the clinic (Lewis and Green 2021).
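
    In code, the score itself is just a weighted sum. Here is a minimal Python sketch of the formula above; the genotype codes X_j and the weights \hat{\beta}_j are invented for illustration (in practice the former come from genotyping an individual and the latter from a GWAS):

    # S_hat = sum_j X_j * beta_hat_j
    X = [0, 1, 2, 1, 0]                          # coded genotypes X_j at m = 5 loci (invented)
    beta_hat = [0.02, -0.01, 0.05, 0.00, 0.03]   # GWAS-derived weights beta_hat_j (invented)

    S_hat = sum(x * b for x, b in zip(X, beta_hat))
    print(f"polygenic risk score: {S_hat:.3f}")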

    While much of the discussion around PRS centers on applications such as determining diagnostic testing frequency (Wald and Old 2019), polygenic embryo selection (PES) posits that polygenic risk scores should be taken a step further and evaluated for embryos, to be used as a basis for discarding, or selecting, specific embryos for in vitro fertilization implantation. The idea has been widely criticized (Karavani et al. 2019). It has been described as unethical and morally repugnant, and concerns about its use for eugenics have been voiced by many. Underlying these criticisms is the fact that the technical issues with PES using PRS are manifold.

    Poor penetrance

    The term “penetrance” for a disease refers to the proportion of individuals with a particular genetic variant who have the disease. Many single-gene disorders have very high penetrance. For example, the F508del mutation in the CFTR gene is 100% penetrant for cystic fibrosis. That is, 100% of people who are homozygous for this variant, meaning that both copies of their DNA have a deletion of the phenylalanine amino acid in position 508 of their CFTR gene, will have cystic fibrosis. The vast majority of variants associated with complex diseases have very low penetrance. For example, in schizophrenia, the penetrance of “high risk” de novo copy number variants (in which there are variable numbers of copies of DNA at a genomic locus) was found to be between 2% and 7.4% (Vassos et al 2010). The low penetrance at large numbers of variants for complex diseases was precisely the rationale for developing polygenic risk scores in the first place, the idea being that while individual variants yield small effects, perhaps in (linear) combination they can have more predictive power. While it is true that combining variants does yield more predictive power for complex diseases, unfortunately the accuracy is, in absolute terms, very low.

    The reason for the low predictive power of PRS is explained well in (Wald and Old 2020) and is illustrated for coronary artery disease (CAD) in a figure from (Rotter and Lin 2020).

    The issue is that while the polygenic risk score distribution may indeed be shifted for individuals with a disease, and while this shift may be statistically significant, resulting in large odds ratios, i.e. much higher relative risk for individuals with higher PRS, the proportion of individuals in the tails of the distributions who will or won’t develop the disease will greatly affect the predictive power of the PRS. For example, Wald and Old note that the PRS for CAD from (Khera et al. 2018) will confer a detection rate of only 15% at a false positive rate of 5%. At a 3% false positive rate the detection rate would be only 10%. This is visible in Rotter and Lin’s figure, where it is clear that controlling the false positive rate (i.e. thresholding at the extreme right-hand side, at high PRS scores) will filter out many (most) affected individuals. The same issue is raised in the excellent review on PES of (Lázaro-Muñoz et al. 2020). The authors explain that “even if a PRS in the top decile for schizophrenia conferred a nearly fivefold increased risk for a given embryo, this would still yield a >95% chance of not developing the disorder.” That is, with a lifetime prevalence of roughly 1%, even a fivefold relative risk amounts to an absolute risk of only about 5%. It is worth noting in this context that diseases like schizophrenia are not even well defined phenotypically (Mølstrøm et al. 2020), which is another complex matter that is too involved to go into here.
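
    The arithmetic behind these detection rates is easy to reproduce. The sketch below assumes, purely for illustration, that the PRS is standard normal in controls and shifted up by half a standard deviation in cases (a stylized stand-in, not the actual distributions from Khera et al.); thresholding the right tail then yields detection rates in the same ballpark as the 15% and 10% figures quoted above:

    from statistics import NormalDist

    z = NormalDist()   # standard normal
    shift = 0.5        # assumed shift (in SDs) of the case PRS distribution

    for fpr in (0.05, 0.03):
        threshold = z.inv_cdf(1 - fpr)            # PRS cutoff in the control distribution
        detection = 1 - z.cdf(threshold - shift)  # fraction of cases above the cutoff
        print(f"false positive rate {fpr:.0%}: detection rate {detection:.0%}")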

    In a recent tweet, Siddiqui describes natural conception as a genetic lottery, and suggests that Orchid Health, by performing PES, can tilt the odds in customers’ favor. To do so, the false positive rate must be low, or else too many embryos will be discarded. But a 15% sensitivity is highly problematic considering the risks inherent in IVF in the first place (Kamphuis et al. 2014).

    To be concrete, an odds ratio of 2.8 for cerebral palsy needs to be balanced against the fact that in the Khera et al. study, only 8% of individuals had an odds ratio >3.0 for CAD. Other diseases are even worse, in this sense, than CAD. In atrial fibrillation (one of the diseases on Orchid Health’s list), only 9.3% of the individuals in the top 0.44% of the atrial fibrillation PRS actually had atrial fibrillation (Choi et al 2019).

    As one starts to think carefully about the practical aspects and tradeoffs in performing PES, other issues, resulting from the low penetrance of complex disease variants, come into play as well. (Lencz et al. 2020) examine these tradeoffs in detail, and conclude that “the differential performance of PES across selection strategies and risk reduction metrics may be difficult to communicate to couples seeking assisted reproductive technologies… These difficulties are expected to exacerbate the already profound ethical issues raised by PES… which include stigmatization, autonomy (including “choice overload”), and equity. In addition, the ever-present specter of eugenics may be especially salient in the context of the LRP (lowest-risk prioritization) strategy.” They go on to “call for urgent deliberations amongst key stakeholders (including researchers, clinicians, and patients) to address governance of PES and for the development of policy statements by professional societies.”

    Pleiotropy predicaments

    I remember a conversation I had with Nicolas Bray several years ago shortly after the exciting discovery of CRISPR/Cas9 for genome editing, on the implications of the technology for improving human health. Nick pointed out that the development of genomics had been curiously “backwards”. Thirty years ago, when human genome sequencing was beginning in earnest, the hope was that with the sequence at hand we would be able to start figuring out the function of genes, and even individual base pairs in the genome. At the time, the human genome project was billed as being able to “help scientists search for genes associated with human disease” and it was imagined that “greater understanding of the genetic errors that cause disease should pave the way for new strategies in diagnosis, therapy, and disease prevention.” Instead, what happened is that genome editing technology has arrived well before we have any idea of what the vast majority of the genome does, let alone the implications of edits to it. Similarly, while the coupling of IVF and genome sequencing makes it possible to select embryos based on genetic variants today, the reality is that we have no idea how the genome functions, or what the vast majority of genes or variants actually do.

    One thing that is known about the genome is that it is chock full of pleiotropy. This is statistical genetics jargon for the fact that variation at a single locus in the genome can affect many traits simultaneously. Whereas one might think naïvely that there are distinct genes affecting individual traits, in reality the genome is a complex web of interactions among its constituent parts, leading to extensive pleiotropy. In some cases pleiotropy can be antagonistic, which means that a genomic variant may simultaneously be harmful and beneficial. A famous example of this is the mutation to the beta globin gene that confers malaria resistance to heterozygotes (individuals with just one of their DNA copies carrying the mutation) and sickle cell anemia to homozygotes (individuals with both copies of their DNA carrying the mutation).

    In the case of complex diseases we don’t really know enough, or anything, about the genome to be able to truly assess pleiotropy risks (or benefits). But there are some worries already. For example, HLA class II genes are associated with type 1 and non-insulin-treated type 2 diabetes (Jacobi et al 2020), Parkinson’s disease (e.g. James and Georgopolous 2020, which also describes an association with dementia) and Alzheimer’s (Wang and Xing 2020). Because HLA class II genes are central to the adaptive immune response, PES that results in selection against the variants associated with these diseases could very well lead to population susceptibility to infectious disease. Having said that, it is worth repeating that we don’t really know if the danger is serious, because we don’t have any idea what the vast majority of the genome does, nor the nature of antagonistic pleiotropy present in it. Almost certainly, by selecting for one trait according to PRS, embryos will also be selected for a host of other unknown traits.

    Thus, what can be said is that while Orchid Health is trying to convince potential customers not to “roll the dice”, by ignoring the complexities of pleiotropy and its implications for embryo selection the company is in fact rolling the dice for its customers (for a fee).

    Population problems

    One of Orchid Health’s selling points is that unlike other tests that “look at 2% of only one partner’s genome…Orchid sequences 100% of both partner’s genomes”, resulting in “6 billion data points”. This refers to the “couples report”, which is a companion product of sorts to the polygenic embryo screening. The couples report is assembled by using the sequenced genomes of parents to simulate the genomes of potential babies, each of which is evaluated for PRS to provide a range of (PRS based) disease predictions for the couple’s potential children. Sequencing a whole genome is a lot more expensive than just assessing single nucleotide polymorphisms (SNPs) in a panel. That may be one reason that most direct-to-consumer genetics is based on polymorphism panels rather than sequencing. There is another: the vast majority of variation in the genome occurs at known polymorphic sites (there are a few million out of the approximately 3 billion base pairs in the genome), and to the extent that a variant might associate with a disease, it is likely that a neighboring common variant, which will be inherited together with the causal one, can serve as a proxy. There are rare variants that have been shown to associate with disease, but whether or not they can explain a large fraction of (genetic) disease burden is still an open question (Young 2019). So what has Siddiqui, who touts the benefits of whole-genome sequencing in a recent interview, discovered that others such as 23andme have missed?
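
    Orchid has not published how its couples report is computed, but the basic simulation step presumably resembles the following sketch: sample hypothetical children by Mendelian transmission from the parents’ phased genomes, then score each simulated child with a PRS. Everything here (the weights, the genomes, the number of loci) is a randomly generated stand-in, and linkage and recombination are deliberately ignored:

    import numpy as np

    rng = np.random.default_rng(0)
    m = 1000                                 # loci used by the score
    beta_hat = rng.normal(0, 0.01, size=m)   # stand-in GWAS weights

    # phased genotypes: two haplotypes per parent, 0/1 alleles per locus
    mother = rng.integers(0, 2, size=(2, m))
    father = rng.integers(0, 2, size=(2, m))

    def simulate_child():
        # each parent transmits one randomly chosen allele per locus
        # (a real simulation would model recombination along chromosomes)
        from_mom = mother[rng.integers(0, 2, size=m), np.arange(m)]
        from_dad = father[rng.integers(0, 2, size=m), np.arange(m)]
        return from_mom + from_dad           # 0/1/2 genotype per locus

    scores = np.array([simulate_child() @ beta_hat for _ in range(500)])
    print(f"simulated child PRS range: {scores.min():.2f} to {scores.max():.2f}")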

    It turns out there is value to whole-genome sequencing for polygenic risk score analysis, but it comes when one is performing the genome-wide association studies on which the PRS are based. The reason is a bit subtle, and has to do with differences in genetics between populations. Specifically, as explained in (De La Vega and Bustamante, 2018), variants that associate with a disease in one population may be different from variants that associate with the disease in another population, and whole-genome sequencing across populations can help to mitigate biases that result when restricting to SNP panels. Unfortunately, as De La Vega and Bustamante note, whole-genome sequencing for GWAS “would increase costs by orders of magnitude”. In any case, the value of whole-genome sequencing for PRS lies mainly in identifying relevant variants, not in assessing risk in individuals.

    The issue of population structure affecting PRS unfortunately transcends considerations about whole-genome sequencing. (Curtis 2018) shows that PRS for schizophrenia is more strongly associated with ancestry than with the disease. Specifically, he shows that “The PRS for schizophrenia varied significantly between ancestral groups and was much higher in African than European HapMap subjects. The mean difference between these groups was 10 times as high as the mean difference between European schizophrenia cases and controls. The distributions of scores for African and European subjects hardly overlapped.” A figure in Curtis’ paper shows the distributions of PRS for schizophrenia across populations (the three-letter codes are abbreviations for different population groups; CEU stands for Northern Europeans from Utah and has the lowest scores).

    The dependence of PRS on population is a problem that is compounded by a general problem with GWAS, namely that Europeans and individuals of European descent have been significantly oversampled in GWAS. Furthermore, even within a single ancestry group, the prediction accuracy of PRS can depend on confounding factors such as socio-economic status (Mostafavi et al. 2020). Practically speaking, the implications for PES are beyond troubling. The PRS in the reports Orchid Health’s customers receive may be inaccurate or meaningless, due not only to the genetic background or admixture of the parents involved, but also to other unaccounted-for factors. Embryo selection on the basis of such data becomes worse than just throwing dice; it can potentially lead to unintended consequences in the genomes of the selected embryos. (Martin et al. 2019) show unequivocally that clinical use of polygenic risk scores may exacerbate health disparities.

    People pathos

    The fact that Silicon Valley entrepreneurs are jumping aboard a technically incoherent venture and are willing to set aside serious ethical and moral concerns is not very surprising. See, e.g. Theranos, which was supported by its investors despite concerns being raised about the technical foundations of the company. After a critical story appeared in the Wall Street Journal, the company put out a statement that

    “[Bad stories]…come along when you threaten to change things, seeded by entrenched interests that will do anything to prevent change, but in the end nothing will deter us from making our tests the best and of the highest integrity for the people we serve, and continuing to fight for transformative change in health care.”

    While this did bother a few investors at the time, many stayed the course for a while longer. Siddiqui uses similar language, brushing off criticism by complaining about paternalism in the health care industry and gatekeeping, while stating that

    “We’re in an age of seismic change in biotech – the ability to sequence genomes, the ability to edit genomes, and now the unprecedented ability to impact the health of a future child.”

    Her investors, many of whom got rich from cryptocurrency trading or bitcoin, cheer her on. One of her investors is Brian Armstrong, CEO of Coinbase, who believes “[Orchid is] a step towards where we need to go in medicine.” I think I can understand some of the ego and money incentives of Silicon Valley that drive such sentiment. But one thing that disappoints me is that scientists I personally held in high regard, such as Jan Liphardt (associate professor of Bioengineering at Stanford), who is on the scientific advisory board, and Carlos Bustamante (co-author of the paper about population-structure-associated biases in PRS mentioned above), who is an investor in Orchid Health, have associated themselves with the company. It’s also very disturbing that Anne Wojcicki, the CEO of 23andme, whose team of statistical geneticists understand the subtleties of PRS, still went ahead and invested in the company.

    Conclusion

    Orchid Health’s polygenic embryo selection, which it will be offering later this year, is unethical and morally repugnant. My suggestion is to think twice before sending them three years of tax returns to try to get a discount on their product.

    The Bulbophyllum echinolabium orchid. The smell of its flowers has been described as decomposing rot.

    in Bits of DNA on April 12, 2021 08:46 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    We’re Worse At Remembering Exactly What We’ve Given To Friends Than What We’ve Given To Strangers

    By Emma Young

    Let’s say a friend asks you to help them to move house. When deciding how much time you can offer, you might consider how much you’ve helped that particular friend lately (and perhaps how much they’ve helped you). But a new paper in Social Psychology suggests that if that friend is particularly close, you’re likely to have a poorer memory of just how much time you’ve dedicated to helping them. You might offer more help than you would to an acquaintance not just because this friend is closer, but because your brain’s distinction between a close friend and yourself is blurrier.

    The idea that the closer we are to someone, the less clear the distinction between our mental representations of our own self and their self, is supported by various earlier studies, including some that we’ve covered. In the new work, Pinar Uğurlar at the University of Cologne, Germany, and colleagues asked three separate groups of participants to divvy up a series of theoretical resources (pizzas, for example, and bitcoin) between themselves and another person, then later recall just how much of each they’d given away.

    In the first study, the theoretical recipient was the person that the participants had identified as being “closest” to them. These participants had also completed something called the Independent Self-Construal Scale. This asks for levels of endorsement of statements such as “I am a unique person separate from others”. The results showed that those with a more distinct self-representation had better recall of how much pizza, and so on, they had chosen to give away.

    In follow-up studies, the team looked for any differences in recall when people were asked to allocate resources to the person closest to them or to someone they had only met once. The researchers found that when people were interacting (in this hypothetical scenario) with a close friend, their recall was significantly poorer than when they were interacting with someone they barely knew.

    The hypothetical nature of the studies is certainly a limitation. Also, the team did not directly investigate perceived levels of self/other overlap or the effect of this on memory. Still, the work does suggest that closeness “can indeed impair people’s memory of their own decisions”.

    There could be a social upside to this, however. Giving away resources (whether that’s time or food, say, or materials) to benefit others carries an immediate cost to an individual. Blurred distinctions between the self and close others could make it easier for a person to make that selfless decision, and so benefit their group — which in the longer term, benefits them. As the team suggests, “not fully separating the self from others on a purely cognitive level may help groups resolve social dilemmas.”

    Interpersonal closeness impairs decision memory.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 12, 2021 01:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The rector who resigned after plagiarizing a student’s PhD thesis

    Lots of good stories are hiding behind retraction notices, and with the flood of retractions — 2,200 just in 2020 — we can’t always keep up. Here’s a story about one 2020 retraction that turns out to involve a rector in Poland who resigned after plagiarizing a student’s PhD thesis. In 2014, Błażej Kochański defended …

    in Retraction watch on April 12, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 12 April 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 12 April at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend.

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @bt0dotninja.

    We hope to see you there!

    in NeuroFedora blog on April 12, 2021 08:41 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The End of Twitter

    The grass root revolution continues, but I'm not on Twitter anymore.

    in For Better Science on April 11, 2021 11:48 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: Faked data in psychology; publishing in predatory journals = misconduct?; how scientists take criticism

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Seven barred from research after plagiarism, duplications in eleven papers …

    in Retraction watch on April 10, 2021 11:29 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Reader/Principal Lecturer in Computer Science (Artificial Intelligence)

    School of Physics Engineering and Computer Science/ Department of Computer Science

    University of Hertfordshire, Hatfield, UK

    FTE: Full time position working 37 hours per week (1.0 FTE)

    Duration of Contract: Permanent

    Salary: UH9 £51,034 - £60,905 pa dependent on relevant skills and experience

    Closing date: 9 May 2021


    Applications are invited for an academic position as Reader/Principal Lecturer in the Department of Computer Science, University of Hertfordshire. The Department has an international reputation for teaching and research, with 64 academic staff, 40 adjunct lecturer staff, and 65 research students and postdoctoral research staff. With a history going back to 1958, the Department teaches one of the largest cohorts of undergraduate students in the UK, and also delivers a thriving online computer science degree programme.


    Main duties and responsibilities

    The person appointed will be expected to make a significant contribution to the leadership of research in the department, including gaining research awards as Principal Investigator, developing the research environment in the department and across the University, and publishing peer-reviewed journal articles and other internationally excellent or world-leading publications. They will also contribute to the development of, and supervise and teach on, doctoral programmes in the UK and internationally across a wide spectrum of AI, especially emerging topics in AI. Possible fields include, but are not limited to:

    • Machine learning: reinforcement learning, Deep Methods, statistical methods, large scale data modelling/intelligent processing and high-performance learning algorithms
    • Robotics: embodied and/or cognitive robotics, HRI, robot safety, emotional/social robots, smart homes and sensors, sensor fusion, assistive robotics, soft robotics, adaptive or evolutionary robotic design
    • Biological and biophysical computation paradigms, systems biology, neural computation
    • Complex Systems: collective intelligence, adaptive, autonomous and multi-agent/robot systems, collective and swarm intelligence, social and market modelling, adaptive, evolutionary and unconventional computation
    • Mathematical Modelling: statistical modelling, information-theoretic methods, compressive sensing, intelligent data visualization, multiscale models, optimization; causality
    • Emerging Topics in AI: computer algebra and AI, topological methods (e.g. persistent homology), algebraic and category-theoretical methods in AI; modern topics in games and AI; quantum algorithms for AI
    • AI and applications: financial modelling, AI and biology/physics/cognitive sciences
    • Foundations: fundamental questions of intelligence and computation, emergence of life/intelligence, Artificial Life

    Preference will be given to candidates who can deliver teaching to Level 7 in a selection of relevant subjects.

    The appointee will also be expected to lead and develop taught modules in a range of computer science areas. For appointees with the appropriate experience, there will be the possibility of taking up the role of Head of Subject Group within the Department of Computer Science.

    Skills and experience

    The appointee will strengthen the research culture in the Department by pursuing research as part of a larger research team, seeking external funding, publishing papers, supervising research students, and participating in commercial activity as appropriate. Therefore it is essential that candidates have a track record (e.g. in published, grant-funded research) in Computer Science. Additionally, experience of different types of assessment and higher education quality assurance is an essential requirement of this role.

    Prior experience of developing modules and/or programmes of study in Computer Science is essential, in addition to significant experience of operating in a UK HE environment, or equivalent professional experience. Readers/Principal Lecturers are expected to take on duties in a leadership capacity, and hence experience of academic leadership, programme leadership and line management is desirable. Good interpersonal and presentation skills with proficiency in the English language are essential, along with the ability to manage conflicting demands and work to deadlines.

    Qualifications required

    Reader/Principal Lecturer applicants must hold a First Degree and a PhD in an appropriate area of Computer Science, or an equivalent, relevant postgraduate professional qualification.

    In addition, the Reader/Principal Lecturer will be expected to contribute to the leadership and management of academic programmes, as well as to participate proactively in enterprise, knowledge transfer and/or research and scholarship in the School. There are expectations of leadership and potentially supervisory oversight of groups of staff. Readers/Principal Lecturers are also expected to contribute to the richness of the academic environment through scholarly activity and support of events, projects and activities, including open days, outreach and extra-curricular initiatives, and potentially to act as a representative of the School or University at national or international fora.

    The School of Physics, Engineering and Computer Science is an Athena Swan Bronze award holder, and we are committed to providing a supportive environment and flexible working arrangements. The university also provides an onsite childcare facility and child-centred holiday clubs. Staff are expected to work in line with the university values: Friendly, Ambitious, Collegial, Enterprising, and Student-focused.

    Contact Details/Informal Enquiries: Informal enquiries may be addressed to Dr Simon Trainis, Head, Department of Computer Science, by email: S.A.Trainis [at] herts.ac.uk. Please note that applications sent directly to this email address will not be accepted.

    Closing Date: 9 May 2021

    Interview Dates: TBC but candidates are advised to be available on 16 and 17 June 2021

    Apply through https://www.herts.ac.uk/staff/careers-at-herts, Reference Number: 032595

    Date Advert Placed: 8 April 2021

    in UH Biocomputation group on April 09, 2021 09:56 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Anesthesiologist loses 50 more papers in 12 months

    A decade has passed since the breaking of the scandal involving Joachim Boldt, a world-renowned critical care specialist who has held steady as the number two author on the Retraction Watch leaderboard. But the case continues to produce developments that have dramatically increased Boldt’s retraction tally. Journals have retracted at least 53 papers by Boldt …

    in Retraction watch on April 09, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Spotting Liars And Fixing Things: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    You might have heard of the “Mozart effect”, the idea that playing babies classical music can boost their intelligence. But is there any truth to that claim? In a word, no — but check out this nice video from Claudia Hammond at BBC Reel to learn more about where the myth came from.


    Studies have found that both male and female observers — including healthcare professionals — underestimate the amount of pain that women are experiencing. We may overestimate men’s pain as well, and there’s some evidence that these gender biases even extend to beliefs about children’s pain. But more work is needed to understand perceptions of pain beyond the standard pool of White, western participants, writes Amanda C de C Williams at The Conversation.


    It’s a common belief that you can spot a liar by the way they act. But research suggests this isn’t really the case: we are no good at deciding whether someone is lying based on their nonverbal behaviour. And while psychologists have found that there are other, better ways of probing the truth of a suspect’s story — such as specific interview techniques — many police forces and border security officials still rely on the old, ineffective methods, writes Jessica Seigel at BBC Future.


    The Psychological Science Accelerator could offer a new model for conducting psychology research, potentially helping the field to get past its replication crisis. The group has already launched several large, international, pre-registered studies, both replications and new work. But is the model sustainable? Brian Resnick takes a look at Vox.


    We tend to think we should fix something by adding more stuff to it. That’s according to a series of studies in which participants were presented with various scenarios like improving a travel itinerary or essay: most people tended to add destinations or words, rather than removing them. But the work also suggests that prompts and opportunities for practice can make people more likely to find “subtractive” solutions instead, writes John Timmer at Ars Technica.


    A US panel has concluded that the use of brain organoids in research is ethical, reports Jocelyn Kaiser at Science. The committee, set up by the National Academies of Sciences, Engineering, and Medicine, found that it is “extremely unlikely” that in the near future brain organoids will be conscious, and that there is no need for any new form of oversight for work using these “mini-brains”.


    How does growing up in poverty affect brain development? A number of studies have found that children from an impoverished background show certain differences in brain structure, as we wrote in 2019. But this data is largely correlational — so now, researchers are looking at how reducing poverty by providing regular cash payments actually affects cognition and brain development, reports Alla Katsnelson at the New York Times.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on April 09, 2021 09:17 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Get your predatory publishing into PubMed!

    How to make an academic career in medicine, a guide for white men and their wives.

    in For Better Science on April 09, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dealing with measles outbreaks in areas of high anti-vaccination sentiment

    Managing the spread of infectious diseases has taken on heightened significance over the last year, as COVID-19 swept the world. Terms like ‘contact tracing’ and ‘vaccine trials’ have become hot discussion topics for the general population, not just those who work in public health or medical research. But the research I want to share here was conducted before coronavirus was even ‘a thing’.

    Back in 2019, we set out to uncover what the top priorities are when dealing with a measles outbreak, particularly in a region where there might be lower than average vaccination rates. We conducted a multi-round survey (an interactive method known as a Delphi survey) with a range of Australian health professionals who are responsible for managing and responding to a disease outbreak: public health officials, infectious disease experts, immunization program staff, and others involved in delivering vaccinations.

    Our goal was to find out what they see as the priority issues when trying to contain an outbreak. We asked our participants to imagine a measles outbreak in a hypothetical region known for its low levels of childhood vaccination coverage. We asked them what practical issues and challenges they would face, and what they would need from non-vaccinating members of the community to effectively manage the outbreak.

    The study we undertook is part of a broader project called UPCaV for short – ‘Understanding, Parenting, Communities and Vaccination’. This project has been examining vaccine refusal and hesitancy in Australia from a range of different perspectives.

    Not all parents who refuse vaccines for their children are “anti-vaxxers”

    An earlier phase of the project involved in-depth interviews with Australian parents who refuse some or all vaccines for their children. Some of the findings from that phase have been published by Wiley et al (2020). Parents reported feeling stigmatized, bullied, disrespected and misunderstood. Notably, not all parents who refuse vaccines for their children are “anti-vaxxers”, a label that has become increasingly used in media reporting and one that tends to close down conversation rather than open it up.

    Trying to combat misinformation, while considered important by the people we surveyed, was deemed a longer-term strategy

    One thing that stood out the most for me from our Delphi survey, was that trying to combat misinformation, while considered important by the people we surveyed, was deemed a longer-term strategy. Trying to change the minds of vaccine-refusers was not considered a priority during an outbreak, nor was offering vaccination. Participants’ first priority was to contact potentially infected people and isolate vulnerable members of the community in order to halt the spread of the disease.

    Another thing I found compelling in our findings was the set of responses to a question about what strategies are most successful for countering mistrust among non-vaccinating parents during an outbreak. Recurring themes in the responses were the need for patience and calm education. Panellists suggested it was important to highlight to parents the serious complications that can arise from measles. And they suggested that it is important not to get into arguments with non-vaccinating members of the community, but instead to offer reassurance, provide accurate information and acknowledge different beliefs. These comments were exciting to see because they align so well with previous research I’ve been involved with, called Sharing Knowledge About Immunisation (SKAI).

    The following two quotes from participants struck a chord with me:

    “Reassurance, patience, education may help with those people that are still ‘sitting on the fence’ about vaccination.”

    “Recognizing and acknowledging people’s beliefs, however conveying facts and not getting drawn into a discussion or arguments.”

    Although these participants were talking about an imagined measles outbreak, their perspectives have significance for how we respond to other disease threats and how we communicate effectively with diverse populations. As I write this (April 2021), the COVID-19 vaccines have begun to be rolled out in Australia. While the questions in our survey were about measles, the implications for how best to manage the threat of disease, while also communicating in an open, thoughtful and compassionate way, especially with people who might be feeling nervous about vaccinations, are timely.

    Further reading:

    Wiley KE, Leask J, Attwell K, Helps C, Degeling C, Ward P et al. Parenting and the vaccine refusal process: A new explanation of the relationship between lifestyle and vaccination trajectories. Social Science & Medicine. 2020;263. doi: https://doi.org/10.1016/j.socscimed.2020.113259.

    SKAI (Sharing Knowledge About Immunisation) website, National Centre of Immunisation Research and Surveillance, https://www.talkingaboutimmunisation.org.au, Australia.

    The post Dealing with measles outbreaks in areas of high anti-vaccination sentiment appeared first on BMC Series blog.

    in BMC Series blog on April 09, 2021 05:02 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    We Have A Strong Urge To Find Out What Might Have Been — Even When This Leads To Feelings Of Regret

    By guest blogger Anna Greenburgh

    Regret seems to be a fundamental part of the human experience. As James Baldwin wrote, “Though we would like to live without regrets, and sometimes proudly insist that we have none, this is not really possible, if only because we are mortal.” Expressions of regret are easy to find throughout the history of thought, and, as indicated in the Old Testament, intrinsic to regret is a sense of emotional pain: “God regretted making humans on earth; God’s heart was saddened”.

    Given the aversive experience of regret, traditional models of decision-making predict that people should try to avoid it. But of course, the picture is more complex — we all have experienced the desire to know what might have been, even if it leads to regret. Now a study in Psychological Science, led by Lily FitzGibbon at the University of Reading, finds that the lure of finding out what might have been is surprisingly enticing.

    Across six experiments, the researchers employed the Balloon Analogue Risk Task (BART) in which participants are required to inflate a computer animation of a balloon. The more they inflate the balloon, the greater the participant’s payoff — but each balloon has a randomly assigned “safe limit” above which it pops, and the participant is paid nothing.

    In each trial, participants decided how many times to pump up the balloon and were then shown the trial outcome: whether the balloon popped (“bust” trials), or remained inflated thereby giving participants a reward (“bank” trials). After this outcome was revealed, they had the opportunity to seek “counterfactual” information — that is, feedback about alternative possible outcomes; in this case, how far they could have pumped the balloon safely in the trial and how much they could have won. Importantly, as the balloons’ safe limits varied randomly across trials, this information could not help performance later on in the task. Participants were asked to rate their emotional state, from sad to happy, after learning the outcome of the trial, and indicate whether this emotional state had changed after receiving the counterfactual information.
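
    To make the trial structure concrete, here is a toy Python simulation of a single BART trial; the payoff per pump and the range of random safe limits are invented parameters, not values taken from FitzGibbon et al.:

    import random

    def bart_trial(pumps_chosen, max_limit=20, payoff_per_pump=1):
        safe_limit = random.randint(1, max_limit)  # randomly assigned per balloon
        if pumps_chosen > safe_limit:
            return "bust", 0, safe_limit           # balloon popped: no reward
        winnings = pumps_chosen * payoff_per_pump
        # counterfactual feedback: safe_limit reveals what could have been won
        return "bank", winnings, safe_limit

    random.seed(1)
    outcome, winnings, safe_limit = bart_trial(pumps_chosen=8)
    print(outcome, winnings, f"(could have pumped safely to {safe_limit})")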

    The researchers examined how often participants sought counterfactual information, as well as its emotional effects. They focussed their analysis on “bank” trials as these trials were expected to clearly elicit regret: counterfactual information on these trials normally signified a missed opportunity as the participant could usually have inflated the balloon more and therefore earned a higher reward.

    Across all experiments, participants seemed to experience regret in “bank” trials: they felt significantly worse after receiving counterfactual information. Unsurprisingly, the greater the missed opportunity, the worse the participants felt. But even though this information elicited regret, counterfactual curiosity was high: participants requested feedback in 46% of “bank” trials across all the main experiments, and 71% in a replication study.

    Strikingly, participants even spent money to receive counterfactual information: although counterfactual curiosity was higher when information was free, when they had to pay for it, they still requested feedback on 18% of bank trials. Similarly, in experiments where participants needed to exert physical effort to obtain counterfactual information, they requested feedback around half the time. This underlines how difficult it is to resist the motivational pull of learning about missed opportunities.

    The counterfactual curiosity observed in “bank” trials also had detrimental effects on participants’ performance. After receiving such feedback, participants took greater risks on subsequent trials, which had a negative effect on the number of points won, particularly when this behavioural adjustment was large. This highlights a mechanism likely relevant to gambling problems: counterfactual curiosity can exacerbate damaging gambling behaviour.

    While many regrets in life pertain to our own mistakes made in isolation, as social beings we continually fret over interactions with others. Of course, the BART is an abstract paradigm so the study cannot speak to regrets of a more social nature. While this is a question for future research, the strength of counterfactual curiosity exposed in the paper might suggest that many of us have a morbid curiosity to seek out regret in all forms.

    The Lure of Counterfactual Curiosity: People Incur a Cost to Experience Regret

    Post written by Anna Greenburgh for the BPS Research Digest. Anna is a PhD student at University College London in Cognitive Science and Clinical Psychology. Her research investigates social cognition across the paranoia spectrum and in psychosis. She has written for academic journals as well as magazines such as Psyche.

    At Research Digest we’re proud to showcase the expertise and writing talent of our community. Click here for more about our guest posts.

    in The British Psychological Society - Research Digest on April 08, 2021 10:14 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Apparent HeLa cell line mixup earns a paper an expression of concern

    A journal has issued an expression of concern for a 2011 paper after recognizing that the researchers may have been using contaminated cell lines. The article, “Downregulation of NIN/RPN12 binding protein inhibit [sic] the growth of human hepatocellular carcinoma cells,” appeared in Molecular Biology Reports, a Springer Nature title. In it, the authors, from China …

    in Retraction watch on April 08, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Sci-Hub Case: Academics Urge Court To Rule Against ‘Extortionate Practices’

    A piece of news about a statement signed by academics urging the Delhi High Court to rule against three publishers that have petitioned the court to have access to Sci-Hub and Libgen blocked in India.

    in Open Access Tracking Project: news on April 08, 2021 12:16 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    When People Hold Morally-Based Attitudes, Two-Sided Messages Can Encourage Them To Consider Opposing Viewpoints

    By Emma Young

    Where do you stand on pheasant shooting? Or single-religion schools? Or abortion? However you feel, your attitudes probably have a strong moral basis. This makes them especially resistant to change. And since anyone who holds an opposing view, based on their own moral stance, is unlikely to be easily swayed by your arguments, these kinds of disputes tend to lead to blow-outs within families and workplaces, as well, of course, as online.

    So, anything that can encourage people to be more open to at least thinking about an alternative point of view could be helpful, reasoned Mengran Xu and Richard E. Petty at The Ohio State University, US. And in a new paper in Personality and Social Psychology Bulletin, they reveal a potentially promising method for doing just this.

    In the first pair of studies, the researchers looked at how various messages about gun control and freedom of speech for Nazis went down with a total of 375 US-based participants. The participants first reported on their attitudes to these topics, and the extent to which those attitudes had a moral basis. Then, those who felt that Nazis should not be allowed to speak in US high schools read a statement that concluded with the argument that, in fact, they should be allowed to. There were two variations, however: this conclusion was preceded either with strong arguments in its favour (a one-sided message), or with these arguments plus an acknowledgement that some people might find this “too much” for high school students (a two-sided message). A similar procedure was used with those who’d initially reported support for gun control.

    Next, the participants used a simple scale to indicate how open they were to reconsidering their position. The researchers’ analysis revealed that as the moral basis of a participant’s attitudes increased, their openness to the opposing view decreased. However, for those who’d read the two-sided message, this effect disappeared: that is, morally-based attitudes no longer seemed to make people less open to opposing views.

    A follow-up study found that when a two-sided argument did not really respect the opposing argument, in that it offered only weak counter-arguments — such as claiming that gun ownership is bad because guns make upsetting loud noises — this was no more effective at influencing the participants than single-sided messages.

    A final study attempted to get closer to examining people’s actual behaviour. The participants in this study had all reported being unsupportive of the idea that masks should always be worn outside during the Covid-19 pandemic. The team found that a two-sided message had a bigger impact than a one-sided message on the participants’ reported openness towards more mask-wearing, and on their reported intention to wear masks more often.

    With respect to the findings overall, the researchers write, “the relative benefit of a two-sided message over a one-sided communication is enhanced as the attitude’s moral basis increases.”

    Conflict and misunderstanding between people with different views do seem to have grown worse in recent decades, they add. This work suggests that one way to tackle this — and to advance your or your organisation’s point of view on a topic with a strong moral underpinning — is to clearly respect the reasons for an alternative viewpoint at the same time as advocating your own position. This is hardly a radical argument. But given the often vitriolic debates that develop around morally based attitudes, it’s surely worth bearing in mind.

    Two-Sided Messages Promote Openness for Morally Based Attitudes

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 07, 2021 12:39 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Researcher charged with abusing his wife has third paper retracted

    A researcher in Canada whose once-brilliant career in kinesiology went from plaudits from his peers to criminal charges of horrific abuse of his wife has notched his third retraction.  As we reported in 2018, Adeel Safdar, formerly of McMaster University and Harvard, where he was a postdoc, was the subject of an institutional investigation over … Continue reading Researcher charged with abusing his wife has third paper retracted

    in Retraction watch on April 07, 2021 11:17 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Prof Jean Bousquet’s Sauerkraut Therapy

    For some reason, Christian Drosten is the most famous COVID-19 scientist of the Charité Berlin medical school. Meanwhile, Professors Jean Bousquet and Torsten Zuberbier found and tested the pandemic cure, and it's Brassica oleracea!

    in For Better Science on April 07, 2021 06:13 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Bullying Between “Frenemies” Is Surprisingly Common

    By Emily Reynolds

    We already know that bullying can be one way of climbing the social ladder for teenagers. Research published in 2019, for instance, found that teenagers who combine aggressive behaviour with prosociality see the most social success.

    But who, exactly, are teenagers bullying? According to Robert Faris from the University of California, Davis and colleagues, writing in the American Journal of Sociology, it might not be who you’d expect. Rather than bullying those more distant from them, the team finds, teens often pick on their own friends.

    Data came from a longitudinal study of middle and high school students in grades 6, 7, and 8 (years 7, 8, and 9 in the UK). The team looked specifically at aggression, creating “networks” that reflected who had been aggressive towards whom at the schools. These were based on peer-nominations by the students, who had named up to five schoolmates who had “done something mean” to them in the past three months, as well as five they had been mean to. This didn’t include friendly teasing, focusing instead on acts of genuine aggression.

    Participants also indicated who their five closest friends were. As with aggression, a matrix was created to understand mutual and unrequited friendships, as well as linking friends of friends and measuring whether friendships were sustained or dissolved over the course of the study. 

    The team also looked at several adverse outcomes of being bullied — anxiety, depression, and lack of attachment to school, based on self-reports from the kids. The aggression matrix and friendship matrix were then cross-referenced in order to ascertain whether students were being victimised by friends, friends of friends, or by more distant schoolmates.
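
    The matrix logic lends itself to a short sketch. Below is a minimal, hypothetical Python illustration (the toy data, variable names and tie categories are mine, not the study’s) of how directed nomination networks can be cross-referenced to ask where aggression falls:

    import numpy as np

    n = 5  # students in a toy school network

    # Directed adjacency matrices built from peer nominations:
    # friends[i, j] = 1 if student i named j as a close friend;
    # aggression[i, j] = 1 if student i was mean to student j.
    friends = np.zeros((n, n), dtype=int)
    aggression = np.zeros((n, n), dtype=int)

    friends[0, 1] = friends[1, 0] = 1   # a mutual friendship
    friends[1, 2] = 1                   # an unrequited tie
    aggression[0, 1] = 1                # aggression within the friendship
    aggression[3, 4] = 1                # aggression between distant schoolmates

    # Friends of friends: reachable in two steps, excluding direct friends.
    two_step = (friends @ friends > 0) & (friends == 0)
    np.fill_diagonal(two_step, False)

    within_friendship = (aggression & friends).sum()
    within_two_step = (aggression & two_step).sum()
    elsewhere = aggression.sum() - within_friendship - within_two_step

    print(within_friendship, within_two_step, elsewhere)  # -> 1 0 1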

    The team found that bullying was more likely to occur within friendships, and between friends of friends, compared to between kids with more distant social ties. This could partly be down to aggression between former friends: aggression was three times more likely to arise within friendships that dissolved during the school year. But it might also indicate a “frenemy” relationship, wherein both friendship and aggression coincide — aggression was four times more likely to occur in friendships that persisted through the study than in more distant relationships (though it’s important to note that the difference in rates of aggression between former friends and “frenemies” is not significant). Unsurprisingly, the team also found evidence that being bullied is associated with significant increases in depression and anxiety and decreases in attachment to school.

    What is clear from the results is that victimisation is a common experience for teenagers — and that it can be friends and friends of friends who perpetrate this aggression, whether those ties are mutual or not. While individual traits or social and familial circumstances are key to understanding why teens bully others, the results also suggest that the complex social dynamics of schools can play a significant part too.

    This may be a chance to rethink anti-bullying programmes, the team suggests: as previous research has shown, aggression can have a positive social outcome for bullies, helping them climb the social ladder. Thinking more carefully about how to strengthen existing friendships and highlight their rewards may be one way of reducing the social value accrued through aggression.

    With Friends Like These: Aggression from Amity and Equivalence

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 06, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An author asked for multiple corrections to a paper. PLOS ONE decided to retract it.

    After an author requested a slew of changes to a published paper, journal editors reviewed the study and spotted “additional concerns” that led to its retraction. The study, titled “Pressure regulated basis for gene transcription by delta-cell micro-compliance modeled in silico: Biphenyl, bisphenol and small molecule ligand models of cell contraction-expansion,” was published in PLOS … Continue reading An author asked for multiple corrections to a paper. PLOS ONE decided to retract it.

    in Retraction watch on April 06, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Seven barred from research after plagiarism, duplications in eleven papers

    A retired Nepali professor and six others have been barred from research after plagiarism and duplicated images were found in 11 of their papers. Parashuram Mishra, a retired crystallographer at Tribhuvan University, in Nepal, is the lead author on all the studies. Most of the papers contain image duplications; the same figures were reused across … Continue reading Seven barred from research after plagiarism, duplications in eleven papers

    in Retraction watch on April 05, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: Peer review ‘brutality’; COVID-19 vaccine trial scandal; homeopathy researcher admits ‘unethical behavior’

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: 25,000: That’s how many retractions are now in the Retraction … Continue reading Weekend reads: Peer review ‘brutality’; COVID-19 vaccine trial scandal; homeopathy researcher admits ‘unethical behavior’

    in Retraction watch on April 03, 2021 02:09 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    25,000: That’s how many retractions are now in the Retraction Watch Database

    We reached two milestones this week at Retraction Watch. Our database — the most comprehensive source for retractions by a wide margin — surpassed 25,000 retractions. And our list of retracted COVID-19 papers, which we’ve maintained for a year, grew past 100 for the first time. When we launched Retraction Watch in 2010, we, along … Continue reading 25,000: That’s how many retractions are now in the Retraction Watch Database

    in Retraction watch on April 02, 2021 12:12 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A longtime whistleblower explains why he’s spent more than a decade trying to get a paper retracted

    Since the report of the MIST Trial was published in Circulation in 2008, I have repeatedly written to the journal to express concern about the paper. Most recently, on February 22, I wrote to the editor-in-chief of Circulation, which is owned by the American Heart Association (AHA), requesting that they retract the 2008 MIST Trial … Continue reading A longtime whistleblower explains why he’s spent more than a decade trying to get a paper retracted

    in Retraction watch on April 01, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    We Have Many More Than Five Senses — Here’s How To Make The Most Of Them

    By Emma Young

    We’re all familiar with the phrase “healthy body, healthy mind”. But this doesn’t just refer to physical fitness and muscle strength: for a healthy mind, we need healthy senses, too. Fortunately, there’s now a wealth of evidence that we can train our many senses, to improve not only how we use our bodies, but how we think and behave, as well as how we feel. Trapped as we are in our own “perceptual bubbles”, it can be hard to appreciate not only that other people sense things differently — but that so can we, if we only put in a little effort.

    But if we’re going to make the most of using and improving our senses to enhance our wellbeing, we have to consider more than sight, hearing, taste, touch and smell. Aristotle’s desperately outdated five-sense model may still be popular, but it vastly underestimates our extraordinary human capacity for sensing.

    Proprioception

    Proprioception — the sensing of the location of our body parts in space — has been relatively ignored, but it’s critical for confidence in using our bodies. If you now shut your eyes, and extend a leg, it’s thanks to this sense that you know exactly where your leg is. To go for a run, then, or work out in the gym, and not fall or injure yourself, you need a good sense of proprioception. Our sedentary lifestyles are a threat to this sense (and the Covid-19 lockdowns certainly did not help). But climbing trees, walking along balance beams, navigating obstacles, crossing stepping stones (which you can simulate at home, using small mats placed on the floor) are all proprioceptively demanding, and so train this sense. According to research led by a team at the University of North Florida, these kinds of exercises not only improve physical coordination but also working memory.

    Yoko Ichino, the ballet mistress at Northern Ballet, based in Leeds, teaches regular proprioception classes, during which students are required to practise complex moves with their eyes shut. “We use our eyes too much,” Ichino says. “We need to use all our other senses as well but because our eyes are open all the time, we never develop them. So I put that into my own training.” She recommends (if it is safe) to move around your home with your eyes closed. This will train not only proprioception but another suite of senses:

    Vestibular senses

    Our vestibular system allows us to sense the direction of gravity (and so which way is up), as well as horizontal and vertical movement (as in a car or a lift) and movement in three dimensions (as on a rollercoaster). Research shows that a healthy vestibular system is important not only for balance but for our sense of being grounded inside a physical body; in fact, people with vestibular problems are more likely to report out-of-body experiences. They’re also more likely to get lost, because a healthy vestibular system is important for a good sense of direction.

    For all of us, though, the older we get, the duller all our vestibular responses become. (This becomes noticeable at a population level from the not very grand old age of 40). Specific vestibular rehab training exercises have been developed for people diagnosed with definite vestibular problems. But the rest of us will benefit from dynamic movements that require moving the head, like those involved in climbing a tree or practising tai chi, as well as anything that challenges our balance.

    Light-detection

    We know that our eyes are not just for seeing. When melanopsin-expressing cells in the retina are exposed to light, they send signals to the master body-clock, in the brain’s hypothalamus (without causing us to see a thing). Certain variations in the gene for this protein have been linked to an increased risk of Seasonal Affective Disorder, and stimulating these receptors in the morning with suitable levels of light helps to ward off low mood. To improve psychological wellbeing, you don’t need to train these receptors to work better, but you do need to help them to work for you by getting outdoors in the morning and avoiding bright light in the evening.

    Smell

    Smell is not hugely regarded, or developed, in many people in Western cultures. But research with hunter-gatherer groups, such as the Jahai, who live in the tropical rainforest in Malaysia, shows that we have the biological capacity to smell extraordinarily well. Asifa Majid, now at the University of York, and colleagues, found, for example, that the Jahai took on average only two seconds to precisely describe an odour, while Dutch-speakers took an average of 13 seconds to arrive at a much poorer description (describing the scent of a lemon simply as “lemony”, say, rather than using more abstract descriptions).

    To develop your sense of smell, Majid and others advocate consciously smelling different things, often. Professional perfumer Nadjib Achaibou, who is based in London, tells me that his sensitive nose is absolutely trained, not born. The best way to enhance your sense of smell is to use it, and explore it, he argues: “You might say, ‘Oh, I like pepper.’ Why? Why do I like it? What is it adding to your dish? That’s the first step to enhance your sense of smell. If you see a rose, stop and smell it. If you have a friend wearing perfume, smell the perfume and describe it. When you buy a shower gel, a toilet detergent or a perfume, ask questions. Read the marketing materials, but also trust yourself. You might think, yes they say there is rose in that but what I can smell is lemon. But what kind of lemon?”

    Put in the effort to enhance your sense of smell, and you should enjoy all kinds of other benefits. Research shows that a fishy smell improves our critical thinking, for example, while a 2018 study in Germany found that people who are more sensitive to smells enjoyed sex more, and women with a better sense of smell reported more orgasms. “The perception of body odours such as vaginal fluids, sperm and sweat seems to enrich the sexual experience,” by increasing sexual arousal, they wrote.

    Temperature

    We have receptors in our skin that register temperatures within specific ranges. Stimulation of our “warmth-sensors”, in particular, has been linked to feeling less lonely, and also “warmer” towards other people. Some of the most publicised results in this field have failed to replicate, leading critics to question them. However, a 2019 study in Social Psychology suggested that results might have been mixed because researchers weren’t taking into account the ambient temperature outside, or inside the lab. When they did this in their research — which involved strapping heated backwraps to participants and asking them about their social plans — the team found support for the idea that feeling cold physically is associated with feeling “colder” socially, driving a desire for more contact with other people. Providing heat (via the backwrap, in this case) could eliminate this effect.

    Inner sensing (interoception)

    About 10% of us are really good at sensing our own heartbeat without feeling for a pulse, 5-10% of us are terrible at it, and the rest fall in between. Research shows that people who are better at so-called “cardiac interoception” experience emotions more intensely, enjoy more nuanced emotions, and are better at recognising other people’s emotions, which is a critical first step in empathy. In contrast, people who don’t experience emotions in the typical way (a condition called “alexithymia”, which is thought to affect up to 10% of people, to some extent) suffer from impairments in inner sensing. Could training inner sensing help, then, to improve our emotional wellbeing? It’s still early days for this research, but work led by Sarah Garfinkel, now at University College London, suggests that it can. This is a training technique that you could try at home:

    1. Sit somewhere quiet and set a timer (on your phone or home digital assistant, perhaps) for one minute, but don’t start it yet.
    2. Now start the timer, and try to count your heart beats.
    3. Do this again, but feel for your pulse this time, to take an accurate measure (this is the feedback that should help your interoceptive awareness to improve).
    4. Repeat all the steps.

    If you can’t sense your heart beating, try exercising first, because this makes it easier.

    Emma Young (@EmmaELYoung), a staff writer on the BPS Research Digest, is the author of ‘Super Senses: The Science of Your 32 Senses and How to Use Them’ (John Murray, 2021).

    in The British Psychological Society - Research Digest on April 01, 2021 09:13 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Steve Jackson and the Moumen Troll

    "I take issues of research integrity very seriously and shall of course review the concerns posted on PubPeer to establish whether there are any issues that need to be addressed." Stephen P Jackson.

    in For Better Science on April 01, 2021 06:59 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Overinterpreting Computational Models of Decision-Making

    Bell (1985)



    Can a set of equations predict and quantify complex emotions resulting from financial decisions made in an uncertain environment? An influential paper by David E. Bell considered the implications of disappointment, a psychological reaction caused by comparing an actual outcome to a more optimistic expected outcome, as in playing the lottery. Equations for regret, disappointment, elation, and satisfaction have been incorporated into economic models of financial decision-making (e.g., variants of prospect theory).
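
    To give a flavour of what these equations look like, here is a common simplified rendering of Bell’s linear model (the notation is mine, a sketch rather than a quotation of the paper). For a gamble paying x with probability p and y < x otherwise, the psychological value of an outcome is its economic payoff plus an elation or disappointment term, scaled by how far the outcome lands from the gamble’s prior expectation:

    \bar{x} = p\,x + (1 - p)\,y, \qquad
    u(x) = x + e\,(x - \bar{x}), \qquad
    u(y) = y - d\,(\bar{x} - y), \qquad
    d > e \ge 0.

    Letting disappointment (d) loom larger than elation (e) is what makes a lottery win that undershoots expectations feel, psychologically, like a loss.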

    Financial choices comprise one critical aspect of decision-making in our daily lives. There are so many choices we make every day, from the proverbial option paralysis in the cereal aisle...

    ...to decisions about who to date, where to go on vacation, whether one should take a new job, change fields, start a business, move to a new city, get married, get divorced, have children (or not).

    And who to trust. Futuristic scenario below...


    Decision to Trust

    I just met someone at a pivotal meeting of the Dryden Commission. We chatted beforehand and discovered we had some common ground. Plus he's brilliant, charming and witty.

    “Are you looking for an ally?” he asked. 


    Neil, Laura and Stanley in Season 3 of Humans

     

    Should I trust this person and go out to dinner with him? Time to ask my assistant Stanley, the orange-eyed (servile) Synthetic, an anthropomorphic robot with superior strength and computational abilities.


    Laura: “Stanley, was Dr. Sommer lying to me just then, about Basswood?”


    Stanley, the orange-eyed Synth: “Based on initial analysis of 16 distinct physiological factors, I would rate the likelihood of deceit or misrepresentation in Dr. Sommer's response to your inquiry at... EIGHTY-FIVE PERCENT.”

    The world would be easier to navigate if we could base our decisions on an abundance of data and well-tuned weighting functions accessible to the human brain. Right? Like a computational model of trust and reputation or a model of how people choose to allocate effort in social interactions. Right?

    I'm out of my element here, so this will limit my understanding of these models. Which brings me to a more familiar topic: meta-commentary on interpretation (and extrapolation).

    Computational Decision-Making


    My motivation for writing this post was annoyance. And despair. A study on probabilistic decision-making under uncertain and volatile conditions came to the conclusion that people with anxiety and depression will benefit from focusing on past successes, instead of failures. Which kinda goes without saying. The paper in eLife was far more measured and sophisticated, but the press release said:

    The more chaotic things get, the harder it is for people with clinical anxiety and/or depression to make sound decisions and to learn from their mistakes. On a positive note, overly anxious and depressed people’s judgment can improve if they focus on what they get right, instead of what they get wrong...

    ...researchers tested the probabilistic decision-making skills of more than 300 adults, including people with major depressive disorder and generalized anxiety disorder. In probabilistic decision making, people, often without being aware of it, use the positive or negative results of their previous actions to inform their current decisions.


    The unaware shall become aware. Further advice:

    “When everything keeps changing rapidly, and you get a bad outcome from a decision you make, you might fixate on what you did wrong, which is often the case with clinically anxious or depressed people...”

    ...individualized treatments, such as cognitive behavior therapy, could improve both decision-making skills and confidence by focusing on past successes, instead of failures...

     

    The final statement on individualized CBT could very well be true, but it has nothing to do with the outcome of the study (Gagne et al., 2020), wherein participants chose between two shapes associated with differential probabilities of receiving electric shock (Exp. 1), or financial gain or loss (Exp. 2).
     


    With that out of the way, I will say the experiments and the computational modeling approach are impressive. The theme is probabilistic decision-making under uncertainty, with the added bonus of volatility in the underlying causal structure (e.g., the square is suddenly associated with a higher probability of shocks). People with anxiety disorders and depression are generally intolerant of uncertainty. Learning the stimulus-outcome contingencies and then rapidly adapting to change was predictably impaired.

    Does this general finding differ for learning under reward vs. punishment? For anxiety vs. depression? In the past, depression was associated with altered learning under reward, while anxiety was associated with altered learning under punishment (including in the authors' own work). For reasons that were not entirely clear to me, the authors chose to classify symptoms using a bifactor model designed to capture “internalizing psychopathology” common to both anxiety and depression vs. symptoms that are unique to each disorder [but see Fried (2021)].1

    Overall, high scores on the common internalizing factor were associated with impaired adjustments to learning rate during the volatile condition, and this held whether the outcomes were shocks, financial gains, or financial losses. Meanwhile, high scores on anxiety-unique or depression-unique symptoms did not show this relationship. This was determined by computational modeling of task performance, using a hierarchical Bayesian framework to identify the model that best described the participants' behavior:

    We fitted participants’ choice behavior using alternate versions of simple reinforcement learning models. We focused on models that were parameterized in a sufficiently flexible manner to capture differences in behavior between experimental conditions (block type: volatile versus stable; task version: reward gain versus aversive) and differences in learning from better or worse than expected outcomes. We used a hierarchical Bayesian approach to estimate distributions over model parameters at an individual- and population-level with the latter capturing variation as a function of general, anxiety-specific, and depression-specific internalizing symptoms. 
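
    To make that concrete, here is a minimal Python sketch (not the authors' code; the reversal schedule, parameter values and names are all illustrative) of the basic model family: a Rescorla-Wagner learner whose learning rate differs between stable and volatile blocks. The impaired adjustment reported in the paper corresponds, roughly, to too small a gap between the two learning rates. The paper's actual models are richer, and are fit hierarchically rather than simulated.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(alpha_stable=0.1, alpha_volatile=0.4, n_trials=200):
        """Track the estimated probability that shape A leads to reward."""
        v = 0.5                  # initial estimate of P(reward | shape A)
        estimates = []
        for t in range(n_trials):
            volatile = t >= 100  # second half: contingencies keep reversing
            p_true = 0.8 if (not volatile or (t // 20) % 2 == 0) else 0.2
            outcome = rng.random() < p_true
            # Rescorla-Wagner update, driven by the prediction error:
            alpha = alpha_volatile if volatile else alpha_stable
            v += alpha * (outcome - v)
            estimates.append(v)
        return np.array(estimates)

    trace = simulate()
    print(trace[:3], trace[-3:])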


    We've been living in a very uncertain world for more than a year now, often in a state of loneliness and isolation. Some of us have experienced loss after loss, deteriorating mental health, lack of motivation, lack of purpose, and difficulty making decisions. My snappish response to the press release concerns whether we can prescribe individualized therapies based on the differences between the yellow arrows on the left (“resilient people”) compared to the right (“internalizing people” — i.e., the anxious and depressed), given that the participants may not even realize they're learning anything.



     Footnote

    1 I will leave it to Dr. Eiko Fried (2021) to explain whether we should accept (or reject) this bifactor model of “shared symptoms” vs. “unshared symptoms”.



    References

    Bell DE. (1985). Disappointment in decision making under uncertainty. Operations Research 33(1):1-27.

    Gagne C, Zika O, Dayan P, Bishop SJ. (2020). Impaired adaptation of learning to contingency volatility in internalizing psychopathology. Elife 9:e61387.

    Further Reading

    Fried EI. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry 31(4):271-88.

    Guest O, Martin AE. (2021) How computational modeling can force theory building in psychological science. Perspect Psychol Sci. Jan 22:1745691620970585.

    van Rooij I, Baggio G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspect Psychol Sci. Jan 6:1745691620970604.

    in The Neurocritic on April 01, 2021 04:25 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Goodbye Discover

    The end of this Neuroskeptic era

    in Discovery magazine - Neuroskeptic on March 31, 2021 08:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Beam us up! Elsevier pulls 26 Covid-19 papers by researcher with a penchant for Star Trek

    An Elsevier journal has retracted more than two dozen Covid-19 papers by a researcher in Malta with a fondness for Star Trek after determining that the articles did not meet its standards for publication.   The move comes several months after we reported that Hampton Gaddy, a student at the University of Oxford, had raised questions … Continue reading Beam us up! Elsevier pulls 26 Covid-19 papers by researcher with a penchant for Star Trek

    in Retraction watch on March 31, 2021 07:35 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Companies’ Succession Announcements Can Inadvertently Make Work Life Harder For Incoming Female CEOs

    By Emma Young

    When an organisation appoints a new male CEO, the announcement will typically highlight his past achievements and the competencies that make him ideal for the job. What if the new CEO is a woman? The widely expected, gender-neutral thing to do is, of course, to make precisely the same type of announcement. However, according to the team behind a new paper in the Journal of Applied Psychology, this can make work life more difficult for her, and shorten the time that she spends in that role. Priyanka Dwivedi at Texas A&M University and her colleagues base this striking conclusion on an extensive analysis of data on women who have been appointed to top positions in the US, as well as in-depth interviews with female executives.

    Why might an announcement that focuses on the winning candidate’s past successes and competences affect a female vs a male leader differently? Perhaps, the team theorised, it might showcase “that the new female leader does not adhere to prescriptive norms about how women ‘should be’. Such violations could elicit social disapproval, backlash and evaluation penalties for women.” In other words, new female leaders who are praised for their competence, confidence and ambition (stereotypically “male”-type leadership traits) may then experience “stereotype threat” — and feel judged for not conforming to the stereotypically female profile of being nurturing, socially sensitive and group-focused. The psychological burden of this could then make it harder for the woman to remain in her job. Lab-based work has supported these ideas, and there has been some anecdotal evidence in its favour, too. But this is the first attempt at major, real-world investigation.

    To start with, the team considered 91 female CEO successions among companies included in major US stock market indices between 1995 and 2012. When a host of other factors that could plausibly have influenced any CEO’s longevity were taken into account, the researchers noted a link between endorsements that focused on success and competence and a shorter time as CEO. (These endorsements came from the companies’ own websites, press releases and annual reports, for example.) Two factors did emerge as being linked to a longer time in the job despite success-focused endorsements, however: the female CEO being an internal rather than an external appointment, and the presence of a relatively high number of female executives in the company. Analysis of a matched sample of male CEOs did not reveal a link between endorsement content and time in the role.

    The team then ran semi-structured interviews with 31 female executives (not all CEOs, though all in senior management positions), each with an average of 25 years’ experience at a variety of US companies. The findings from this qualitative portion of the study supported the team’s idea that such women are subject to stereotype threat. They found that “female executives were chronically aware of gender stereotyping and continued biases in the male-dominated context of upper echelons both during their transition into, and throughout their time in, their executive roles.” These women also reported all kinds of negative personal responses, including anxiety about being perceived as incompetent, or having insufficient legitimacy as a leader, as well as feeling exhausted with dealing with the stereotypes. For example, one commented: “When a woman is demanding, they think she’s a bitch… But if a man is demanding, they see him as tough and good.” Another talked about how, as she moved up the ranks, she felt much more hostility and exclusion — so much so, that she eventually quit her job.

    The interview-based segment of the study supports the idea that women in top leadership roles suffer from unhelpful stereotypes about women. When it comes to the team’s main conclusion, however — that the content of the endorsement of a new female CEO can affect her longevity in the job — it’s important to note that the link is correlational. (Though the team’s failure to find such an association for the male CEOs is certainly worth stressing.) Though the researchers did control for a lot of other potential influences on their results, this study could not definitively demonstrate that the content of the appointee endorsement in and of itself causes greater or lesser stereotype-related pressures on a female CEO. But if their suggestion that it does is correct, what are the practical lessons?

    It does pose a dilemma for companies, the researchers accept. “While they would do well to support their new female leaders in as many ways as they can, they also need to constantly recognise that any such support could backfire because of implicit gender biases and stereotyping”. The team did find that the presence of more female executives mitigated this, though. So: “Perhaps the only long-term solution to this dilemma is for organisations to obtain a critical mass of female leaders in order to buffer them against potentially negative stakeholder perceptions.” They add that though this study focused on women, it’s likely that CEOs from other minorities in the executive level in businesses will experience something very similar. The finding also contributes to the growing body of work finding that well-intentioned attempts to counteract gender stereotypes can backfire.

    “Burnt by the spotlight”: How leadership endorsements impact the longevity of female leaders.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 31, 2021 11:17 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Author, Author! Or perhaps we should say Fake Author, Fake Author!

    Researchers in Iran have lost their 2019 paper on nanofluids after the journal learned that their list of authors included an engineer at the University of Texas who had nothing to do with the work.  The article, “Numerical study on free convection in a U-shaped CuO/water nanofluid-filled cavity with different aspect ratios using double-MRT lattice … Continue reading Author, Author! Or perhaps we should say Fake Author, Fake Author!

    in Retraction watch on March 31, 2021 10:01 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The relationship between self-harm and bullying among adolescents

    Self-harm is often the result of a complex interplay of genetic, biological, psychological and environmental factors. Bullying is one factor that can increase the risk of self-harm. However, not all adolescents who experience bullying harm themselves.

    Bullies are also at risk

    It is not only being the victim of bullying that is related to self-harm, but also being the aggressor. In fact, in our study we found that adolescents who both bully and are bullied by their peers (the “bully-victims”) were the group most vulnerable to self-harm. They were six times more likely to self-harm than adolescents who weren’t bullies or bullied.

    It is possible that the “bully-victims” have the broadest range of adjustment problems, presenting difficulties common to both bullies and victims. An earlier study found that many of the bully-victims first have a history of being bullied, and then begin to bully peers later in adolescence. Thus, they suffer from both emotional and behavioral problems.

    Why is there a relationship between self-harm and bullying behavior?

    Both bullying victimization and self-harm are associated with emotional problems such as anxiety and depression. In our study we found that emotional problems and parental conflicts were important factors for the association between the bullied adolescents and self-harm, and between the bully-victims and self-harm.

    Our study showed that school behavioral problems also accounted for some of the relationship between self-harm and the bullies, and between self-harm and the bully-victims. This supports the idea that the act of bullying is part of a broader pattern of conduct problems involving aggressive and delinquent behaviors, school failure, and dropping out.

    Parental support and school well-being

    School well-being (including support from teachers) was protective against self-harm for the bullied and the bully-victims

    While we see that those who are bullied and/or bullies may be more likely to engage in self-harm, we know that not everyone who is bullied or bullies self-harms. We therefore investigated what might protect against self-harm amongst the bullies, bullied and bully-victims. We found that although parental support had a protective effect on self-harm among boys and girls in general, it was especially important for those who are bullied.

    This shows that the willingness to talk to and seek help from parents during a difficult time like bullying may protect against self-harm among adolescents.

    We were surprised to also find that school well-being (including support from teachers) was protective against self-harm for the bullied and the bully-victims. Ours appears to be the first study to investigate the buffering effect of school well-being on bullying behavior and self-harm among adolescents. This is an important result for the prevention of self-harm. Schools, parents, and health care professionals should be aware of the importance of school well-being for adolescents who are being bullied, in terms of identifying those at risk of self-harm.

    How frequent is self-harm?

    Our study showed that fifteen percent of participating adolescents reported engaging in self-harm during the last year. This is consistent with earlier studies, which have found 12-month prevalence rates of between 10 and 19% around the world.

    Who participated in the study?

    The data we used in this study came from “Ungdata”, a large, cross-sectional national survey designed for adolescents. A total of 14,093 adolescents aged 12 to 19 years, from different parts of Norway, participated in the study; this was 87% of those invited to participate.

    Our data was collected at one point in time and therefore we do not know if the bullying occurred prior to self-harm. However, previous longitudinal studies have shown that bullying increased the risk of self-harm and not the other way around.

    How we measured self-harm and bullying

    We measured self-harm by asking whether the adolescents had tried to harm themselves in the past 12 months. Being bullied was measured by asking adolescents if they had been teased, threatened, or frozen out by other young people in school, in their free time, online, or on their mobile phones. Bullying other peers was measured by asking if they had taken part in teasing, threatening, or freezing out other young people at school, in their free time, online, or by mobile phone. We created a new variable to identify those who were both bullied and bully others (the bully-victims) by combining the variables “Bullied” and “Bullying other peers”.
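
    That derivation is simple enough to express in code. Here is a minimal sketch in Python, with hypothetical column names rather than the actual Ungdata codebook:

    import pandas as pd

    df = pd.DataFrame({
        "bullied":        [1, 0, 1, 0],  # teased/threatened/frozen out by peers
        "bullies_others": [1, 1, 0, 0],  # took part in doing the same to others
    })

    def classify(row):
        """Combine the two measures into the categories used above."""
        if row["bullied"] and row["bullies_others"]:
            return "bully-victim"
        if row["bullied"]:
            return "bullied"
        if row["bullies_others"]:
            return "bully"
        return "not involved"

    df["bully_status"] = df.apply(classify, axis=1)
    print(df)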

    Conclusion

    High levels of parental support and school well-being may buffer the harmful relationship between bullying behavior and self-harm

    There is a strong link between bullying and self-harm. Interventions to address bullying may reduce self-harm. Our findings also suggest that high levels of parental support and school well-being may buffer the harmful relationship between bullying behavior and self-harm. Addressing these factors may be important in reducing the risk of self-harm among those experiencing bullying.

    The post The relationship between self-harm and bullying among adolescents appeared first on BMC Series blog.

    in BMC Series blog on March 31, 2021 05:53 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Supporting community needs with an enhanced code policy on PLOS Computational Biology

    PLOS Computational Biology has adopted an enhanced code sharing policy for all papers submitted from 30 March 2021. The change, announced in this editorial, comes in response to community needs and support for initiatives which drive Open Science practices. 

    Community drivers

    Code sharing is not new to many of our authors. Research shows that 41% of papers in PLOS Computational Biology already share code voluntarily (Boudreau et al. 2021), demonstrating the community’s willingness to make their work both rapidly available and methodologically transparent. This creates a strong foundation for implementing a stronger policy and also works towards establishing code sharing as a normal research behaviour. 

    PLOS Computational Biology defines “open” as more than the availability of a research article. Our authors, editors, and readers see data and code sharing as tools to boost reproducibility and transparency. Sharing code alongside data will allow others to check and reproduce work and ultimately drive new discoveries. Requiring authors to share their code (unless there are good reasons not to) works towards making the research published in PLOS Computational Biology as robust as possible for the benefit of the whole research community.

    In developing the policy we sought views from researchers in the computational community on the barriers they face when sharing code (Harney et al. 2021). We then tested our policy text with researchers to help inform its design and content. This has allowed us to respond to researcher needs, for example by allowing exemptions to the policy for those with legitimate legal or ethical constraints on sharing. In this way, PLOS Computational Biology continues to be shaped by policies the community has helped define, and Open Science values that reflect the way they want to communicate research. 

    Open Science at PLOS

    Open Science is one of our founding principles at PLOS. Enhancing the code sharing policy at one of our journals in response to community needs is just one step we are making to increase the adoption of Open Science practices. We aim to empower researchers to share their good practices because we believe communities are best placed to define their own needs. The computational biology community is shaping the future of research communication by leading the way in practices that make science more open and equitable for all. However, we do offer guidance and help to those who need it, for example, by detailing best practice for code sharing alongside our policy text. No matter how researchers choose to make their work Open, PLOS Computational Biology provides options to support researcher needs and the platform to influence broader change.

    Collaboration with the community and responding to their needs is part of our approach to Open Science, regardless of how that community is defined. We will be closely monitoring the reaction to the code sharing policy at PLOS Computational Biology and exploring what an enhanced policy could mean for other communities that we serve. 

    Written by Lauren Cadwallader, Open Research Manager

    References

    Boudreau M, Poline J-B, Bellec P, Stikov N (2021) On the open-source landscape of PLOS Computational Biology. PLOS Comput Biol 17(2): e1008725. https://doi.org/10.1371/journal.pcbi.1008725

    Harney J, Hrynaszkiewicz I, Cadwallader L (2021) Code Sharing Survey 2020 – PLOS Computational Biology. figshare. Dataset. https://doi.org/10.6084/m9.figshare.13366025

    The post Supporting community needs with an enhanced code policy on PLOS Computational Biology appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 31, 2021 04:59 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    People With Depression Show Hints Of Distorted Thinking In The Language They Use On Social Media

    By Emily Reynolds

    A key facet of cognitive behavioural therapy is challenging “cognitive distortions”, inaccurate thought patterns that often affect those with depression. Such distortions could include jumping to conclusions, catastrophising, black and white thinking, or self-blame — and can cause sincere distress to those experiencing them.

    But how do we track cognitive distortion in those with depression outside of self-reporting? A new study, published in Nature Human Behaviour, explores cognitive distortions online, finding that those with depression have higher levels of distortion in the language they use on social media.

    Krishna Bathina from Indiana University Bloomington and colleagues looked at the language of over 6 million tweets from 7,349 Twitter accounts, some belonging to users who had previously tweeted that they had a diagnosis of depression and others selected at random. The researchers were specifically interested in how often these tweets contained 241 phrases which they considered to be the “building blocks” of various cognitive distortions associated with depression. For instance, the phrase “everyone believes” was taken to be part of the “mindreading” distortion, in which people think that they know what others are thinking. Other cognitive distortions include catastrophising, overgeneralisation, and discounting positive experiences.
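
    As a rough illustration of the approach (this is not the authors’ code, and the phrases and tweets below are invented examples), counting distortion markers in a set of tweets might look like this in Python:

    import re
    from collections import Counter

    # Hypothetical "mindreading" and "overgeneralising" building blocks.
    DISTORTION_PHRASES = {
        "everyone believes": "mindreading",
        "nobody likes": "mindreading",
        "i always": "overgeneralising",
        "i never": "overgeneralising",
    }

    def distortion_counts(tweets):
        """Count occurrences of each distortion type across a user's tweets."""
        counts = Counter()
        for tweet in tweets:
            text = tweet.lower()
            for phrase, distortion in DISTORTION_PHRASES.items():
                counts[distortion] += len(re.findall(re.escape(phrase), text))
        return counts

    tweets = ["Everyone believes I'm a failure", "I never get anything right"]
    print(distortion_counts(tweets))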

    Results showed that those who had tweeted about a diagnosis of depression used significantly more of the cognitive distortion phrases than those in the random condition. This was true for nearly all of the 12 cognitive distortion types, except for fortune-telling (predicting a negative outcome), mind reading and catastrophising, which were not significantly different between the two groups. The distortion types that were most prevalent in the depression cohort compared to the control group were personalising (taking things personally) and emotional reasoning (mistaking a feeling for a fact).

    This suggests that depression could be tracked via language — particularly online, where people may be more open about what they’re thinking and feeling. The team also suggests that the findings could have an impact on the way cognitive behavioural therapy is delivered: the types of language someone uses can reflect specific cognitive distortions, giving better insight for targeted, relevant treatment.

    But although all data was anonymised, the team acknowledges that scraping data on mental health from users of social platforms throws up tricky ethical issues; some users have expressed serious discomfort at their personal information being used without consent. The team suggests that in the future “automated interventions” for depression could target people using such language on social media — but when so many people use social platforms to express themselves and find a space and community to support them, such interventions may be unwanted.

    There are also questions about the self-disclosure of those in the depression condition. Firstly, there was no way for researchers to verify the diagnoses; secondly, there may have been many in the random condition with diagnoses they just never thought or wanted to share. Although jokey tweets were filtered out (e.g. “That Game of Thrones episode has given me a diagnosis of depression”), the language of mental illness is often co-opted or used in an exaggerated fashion — after political defeats, for example. Looking at the subtlety of online communications may also provide different and interesting answers.

    Individuals with depression express more distorted thinking on social media

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 30, 2021 01:50 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The communal misconduct by Zhenhe Suo in Oslo

    "the Committee believes that when carelessness or scientific dishonesty can be found in so many articles with so many different authors in question, there must be a lack of training and / or lack of control over data handling. The committee therefore believes that it is qualified probability that there has been an institutional system error when it comes to training. The committee believes that good routines for training are a line responsibility and can not only be attributed to group or project manager."

    in For Better Science on March 30, 2021 11:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Erection of a Placebo

    When yesterday's placebo is tomorrow's treatment

    in Discovery magazine - Neuroskeptic on March 30, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Frequent Workplace Interruptions Are Annoying — But May Also Help You Feel That You Belong

    By Emma Young

    Workplace disturbances during the Covid-19 pandemic aren’t quite what they used to be. Now you’re more likely to be interrupted by a cat jumping on your keyboard or a partner trying to make a cup of tea while you’re in a meeting — but if you can cast your mind back to what it was like to work in an office, perhaps you can recall how annoying it was to be disturbed by colleagues dropping by with questions or comments. These “workplace intrusions” used to be common in offices, and no doubt will be again. There’s certainly plenty of evidence that they interfere with our ability to complete tasks, and that we can find them stressful. However, no one’s really considered potential benefits, note Harshad Puranik at the University of Illinois and colleagues. In their new paper in the Journal of Applied Psychology, the team reports that though there is a dark side to these interruptions, there’s a bright side, too.

    Before the pandemic, Puranik and his colleagues studied 111 people with an average age of about 35 who worked full time. At around noon every work day for three weeks, these participants used simple scales to report any work intrusions (such as being interrupted by someone who wanted to ask questions or to assign them a new task); their stocks of willpower for self-regulation (reporting on their level of agreement with “I feel drained right now”, for example); their sense of how connected they felt with other people at work (a measure of “belongingness”); and also their stress levels. In the late afternoon each day, they completed a second brief survey, which assessed their level of job satisfaction.

    Consistent with earlier work, the team found that more work intrusions were associated with less energy for self-regulation — which helps you to immediately return to a task after being disturbed, rather than do a bit of online shopping, for example. What’s more, statistically speaking, this depleted self-regulation explained a further link between more work intrusions and lower job satisfaction. So far, so dark-side.

    The team also found, however, that work intrusions are not all bad. In fact, more work intrusions were associated with higher ratings for belongingness. High belongingness ratings were also independently linked with greater job satisfaction; though the association between more disturbances and higher ratings for belongingness was not strong enough to produce a net increase in job satisfaction, it did seem to mitigate the negative effects of being disturbed. “To our knowledge,” they write, “this is the first evidence of a positive relationship between work intrusions and job satisfaction, implying this relationship might be more nuanced than previously thought.” They add that “to our knowledge, this is the first study to show that in a workplace setting, experiencing belongingness can somewhat counteract the effects of self-regulatory resource depletion.”

    Though some people will continue to work from home after lockdown restrictions lift, many of us are now preparing to return to offices. Pre-Covid-19, workplace intrusions were described as a “common and consistent” feature of work life for people in many different kinds of jobs. Given that the consensus to date has been that they carry only downsides, these new findings of a belongingness boost are worth noting. “We believe this is a crucial insight,” the team writes.

    Excuse me, do you have a minute? An exploration of the dark- and bright-side effects of daily work interruptions for employee well-being.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 29, 2021 11:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 29 March 1400 UTC

    Photo by William White on Unsplash.


    Please note the change in time for today's meeting: 1400 UTC instead of 1300 UTC

    Please join us at the next regular Open NeuroFedora team meeting on Monday 29 March at 1400UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1400 today'
    

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on March 29, 2021 09:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Cuteness And Self-Compassion: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    Like humans, octopuses have both an active and quiet stage of sleep, reports Rodrigo Pérez Ortega at Science. Researchers found that for 30-40 minutes of sleep the creatures are fairly still with pale skin, but for about 40 seconds their skin turns darker and they move their eyes and body. In humans, dreaming happens in the active, REM stage, but scientists still don’t know whether the octopuses also dream during their active sleep.


    During the pandemic we haven’t only missed out on socialising — we’ve also been deprived of novel experiences. And a lack of novelty can be detrimental to our wellbeing and even our cognitive functioning, writes Richard A Friedman at The Guardian.


    A growing body of evidence suggests that sustaining head injuries in sports raises the risk of players developing neurodegenerative diseases. Now a longitudinal study following a cohort of Americans has found that even mild head injuries can increase the risk of dementia, reports Sara Harrison at Wired. The team also found that the increase in risk was greater for women than men and for White than Black people, but more work is needed to understand these differences.


    What’s going on in our brains when we look at a cute little baby or kitten? At Science Focus, Thomas Ling explores the neuroscience of cuteness.


    Some people who have recovered from Covid-19 report a loss of smell — or experience previously nice smells as unpleasant. And that can have devastating social consequences, writes Alyson Krueger at The New York Times: many sufferers report that they are no longer able to be intimate with loved ones or eat meals with friends and family.


    Scientists have looked at the personality and cognitive abilities of “psychonauts”, people who experiment with psychedelic drugs and document their experiences. This group showed high levels of sensation-seeking and risk-taking, report the researchers, Barbara Sahakian and George Savulich, at The Conversation. But they didn’t have any deficits in learning and memory (unlike a group of “club drug” users who were seeking help for addiction), suggesting that they were avoiding harmful patterns of drug use.


    Being kind to ourselves is vital for our wellbeing and personal growth — and yet a lot of us are not very good at it. At Psyche, Christina Chwyl examines the research around self-compassion, and explores how we might become better friends to ourselves.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on March 26, 2021 01:17 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Die with a smile: antidepressants against COVID-19

    "Fluvoxamine could certainly be something you wanna put in the tool chest. 'Cause it looks as if it has the promise to reduce the likelihood of severe illness." - Francis Collins, NIH Director.

    in For Better Science on March 26, 2021 06:27 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Building Neuromatch Academy

    Neuromatch Academy 2020 is a three-week online summer school in computational neuroscience that took place in July of 2020. We created interactive notebooks covering all aspects of computational neuroscience – from signals and models of spikes to machine learning and behaviour. We hired close to 200 TAs to teach this material to 1,700 students from all over the world in an inverted classroom setting. Over 87% of students that started the class finished it, a number unheard of in the MOOC space.

    By now, we have a preprint out, and NMA has been covered in Nature and the Lancet. So how did we do it? Building something that goes from 0 to several hundred volunteers working tirelessly is a huge endeavor, and I’m sure a lot of the readers of this blog coming from an academic background have trouble even imagining how you can get something like that off the ground. Richard Gao wrote a great post on how it felt to be flying in this anarchist hacker spaceship as it was being built. I wanted to share some thoughts with you that I hope will be helpful if you ever want to create an experience at this scale.

    A little background: my stint in teaching

    After my time at Facebook, I wanted to focus on the academic route. I became interested in teaching and taught an introductory CS class at a local college, building my materials from scratch. I thought, rather naively, that since I knew CS, this would be fairly straightforward. But preparing course materials for beginners is anything but easy. One issue is the curse of expertise: if you know a subject pretty well it becomes harder to explain it to a novice. For example, here’s my explanation of globals in Python:

    Python has two scopes: module and function. You have read access to module variables inside of a function. However, if you want to write to a module variable, you need to use the global keyword. Unless the variable is a reference type… If a variable is a dict or a list, for example, you can change its contents inside a function even if it’s a module variable. That’s because Python has pass-by-ref semantics for complex types. You see, default pass-by-ref semantics for complex types were originally introduced in fourth generation languages to get rid of the difficulty of granular pass-by-value or pass-by-ref semantics in languages like C. […many rambling minutes later…]. In any case, don’t use globals if you can avoid them.
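
    (For concreteness, here is a minimal runnable sketch of the behaviour that explanation describes – the code itself was not part of the lecture:)

    counter = 0   # a module-scope variable
    scores = []   # a module-scope list: a mutable, "reference" type

    def bump():
        global counter   # without this, `counter += 1` raises UnboundLocalError
        counter += 1

    def record(x):
        scores.append(x)  # mutating a module-level list needs no `global`

    bump()
    record(42)
    print(counter, scores)  # prints: 1 [42]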

    Although this explanation is technically correct, it doesn’t fit with novice students’ current development, and giving that explanation is more likely to make them feel inadequate than enlightened. It takes a tremendous amount of attention to detail, radical empathy and craftsmanship to understand the adult learner’s motivations – both intrinsic and extrinsic – and to create materials that engage them at the right level. And creating this stuff takes a huge amount of time! By the time I finished the class, I was ready to throw away the material I created. At the same time, it felt quixotic to take half a year off to craft the perfect materials for a class that was going to be taken by ~15 students a semester. The lack of good materials meant that I had to spend much of my time in class on adjusting materials dynamically – at the end of the day, I was completely exhausted. Surely, there must be a better way!

    In response to the global pandemic, Chris Piech and Mehran Sahami decided to offer the first half of the introductory Stanford CS class to students at large, an experiment called Code in Place. Over 800 section leaders signed up to give 6 weeks of instruction on core computational concepts in Python. Here was a completely different approach to teaching a subject: very high quality materials crafted over years at Stanford – including a robot-control environment in Python called Karel – given in a hybrid setting, with the section leader (aka TA) holding Q&A sessions and solving test problems with students in an inverted classroom setting. And it worked! Students were engaged and stuck around, and they worked on self-directed projects to continue their learning. What I found is that by not having to focus so much on materials prep and the fallout from having so-so materials, I had more time to connect with the students on an emotional, social level – understanding what they’re stuck on, where they’re coming from, what their motivations are, etc.

    Karel is an environment where students can learn the basics of programming, including variables, if statements, for loops and decomposition, by controlling Karel, a cute robot on a grid

    A few weeks into Code In Place, I came across Konrad Kording’s Twitter post looking for NMA volunteers. By that time, they had a core team – Megan in the Exec role, Sean as finance guru, Brad as ops, Konrad as dreamer, Gunnar and Paul on curriculum. They had just gotten started and needed people with technical skills. I took on the technical chair, things escalated, and within a few weeks I was CTO and, along with many others, was doing NMA full time. My time in industry proved to be an asset – working with short deadlines, focusing on shipping, and managing relationships with a large interdisciplinary team. Pretty soon, I was helping coordinate dozens of volunteers, a growth experience that I didn’t know I was ready for. So if you’re wondering whether industry is right for you: if you want to build big things and manage big collaborations in the future, I think it’s a great place to get that experience.

    Deciding what to build

    When you first get started on a large project like this, you get overwhelmed with ideas about what it is that you’re trying to build. One framework for orienting your thoughts is the Heilmeier questions:

    • What are you trying to do? Articulate your objectives using absolutely no jargon.
    • How is it done today, and what are the limits of current practice?
    • What is new in your approach and why do you think it will be successful?
    • Who cares? If you are successful, what difference will it make?
    • What are the risks?
    • How much will it cost?
    • How long will it take?
    • What are the mid-term and final “exams” to check for success?

    If you can answer these questions, you know what you’re building, and you can get a team to congregate around those ideas. This isn’t very different, conceptually, from writing a grant. For us, it was really the fundraising document that formalized a lot of our early brainstorming. The answers to these questions could be written simply as:

    • We’re building an online, 3-week intensive summer school in computational neuroscience. It will be cheap ($100 or less to attend) and accessible to all.
    • Traditional summer schools offer a great experience and the chance to create life-long scientific networks, but they are elitist and expensive. MOOCs are cheap and accessible to all, but a vanishingly small percentage of people actually complete them because they don’t foster belonging.
    • We create high-quality videos and interactive notebooks and we deliver them in pods of students matched to TAs. We offer the close-knit experience of traditional summer schools with the accessibility of MOOCs in a new hybrid model. People are bored and alone because of COVID so we have a unique opportunity to show that this online inverted classroom model can work.
    • We’ll bring cheap high-quality education to thousands of people and will set the standard for open source educational materials for years to come. We can bring the model to many other areas of science.
    • Execution risk and legal risk
    • $1,500 for each TA, and we’re aiming for 200 TAs
    • We’re launching July 13th
    • We’ll track attendance, completion metrics, student surveys, website statistics, etc. to have a 360-degree view of our impact. Our core metric is completion of the class and we aim for 80% completion.

    Within each committee, you can take Heilmeier’s questions and consider their relevance to your committee’s duties. The curriculum committee’s goal might be to create high-quality materials for the class, while the technical committee might aim to deliver 99% uptime during the class. The point is to try to have clarity on your goals so that when crunch time happens you can prioritize.

    Running on enthusiasm

    I was really enthusiastic about taking what I learned from Code in Place and bringing that to computational neuroscience. The core team had been thinking about a new kind of summer school for years before NMA got started and they were ready to tap into their network to jump start the effort. In the early days, being enthusiastic about the project and having a great story to tell are instrumental in finding volunteers. We’re democratizing education! We’re bringing grad-level education to underserved communities! We’re doing something no one’s ever done!

    I volunteer at a soup kitchen, and one thing they emphasized during our training is that a volunteer’s time is a gift. You have to have a pipeline that translates a volunteer’s enthusiasm into action; otherwise you’re refusing to accept that gift, and that’s frustrating for the volunteer. At first, we would bring new people into our Slack workspace and expect them to find their own way. We realized pretty soon that having people in the Slack is just the first step – you need a plan to retain volunteers. This became especially true as the Slack workspace shot up to hundreds, sometimes thousands, of messages a day. Empathize with those poor people who join and are immediately bombarded with way too much information!

    For the technical committee, what worked well to engage new volunteers was task triage. There’s a backlog of stuff that needs to be done – update the website, crank out some forms, clean up the GitHub page, prepare the forum, etc. In a synchronous meeting, you go through the backlog and people volunteer to take on different tasks. That means your committee head needs to have the tasks partly groomed beforehand. They pull up the Kanban board during the meeting and fill it in – we used Quire, but Asana or even just a Google Doc could work equally well. Everybody gets a chance to pitch in, and confusion about the scope of each task can be resolved during the meeting. It’s also a big team-building exercise: you get to see other people pitch in. We applied this method to bootstrap several efforts, including the technical team, the video editing team, the waxers team (see later), the community managers, etc.

    If you prefer the asynchronous route, GitHub issues with a good-for-first-timers tag can be useful. Regardless of the software you use, you want to embody openness: you want the backlog of tasks to be radically transparent and volunteers to feel maximal agency. This is a common model in open source communities that can be fruitfully adapted to education.

    Running like clockwork

    Assigning tasks of different sizes and priorities in a backlog grooming session is great for engaging new volunteers, but you will also need to handle time-sensitive tasks. For time-sensitive things, you need a Gantt chart with a detailed day-by-day breakdown of the tasks to be done, with somebody accountable for each task (the single-threaded owner). I was taught the arcane art of Gantt charts by my colleagues Keith and Frances at Facebook, alums of Fitbit and the Apple hardware team. Let me tell you something: if you want to get stuff done on time, somebody who knows how to ship hardware will make it happen – when you build consumer products, you have to ship them on time, no matter what.

    The arcane art of the Gantt chart

    For an online summer school, that means having deadlines for student admissions, TA applications, first drafts of content, second drafts of content, reviews, editing, posting on GitHub, etc. for every single day of content. The deadlines should be visible to everybody, and the people responsible for them should be accountable. Elizabeth DuPre pointed me to this talk by the creator of Elm – in open source projects, the right culture doesn’t just happen by accident, culture happens because norms are defined and reinforced. “Deadlines matter” is a norm that is different than what most people are used to in the context of giving instruction. If you’re giving your own class, you can keep editing your slides up to 2 minutes before the class. You can’t do that at NMA – your slides will be reviewed, your video recording will be edited, your video will be captioned, etc. – if you hand in your slides late, everybody down the pipeline suffers.

    If you want deadlines to matter you have to will them into existence by making them a norm inside of an organization and reinforcing that by making them visible to everybody. Keeping track of these deadlines eventually led to a daily cadence of standups, which I ran – ironic because I always hated standups in industry. Konrad once called me “a little German” for my insistence on deadlines, which is very funny since I’m almost absurdly disorganized in my personal life. Being inside of a big org is different. It’s an embodiment of a deeper principle of contractualism – we define and agree on what we owe to each other and bind ourselves to that, and that’s what defines morality within our community. The upside of that is the contract is multilateral. When we couldn’t get the ideal matching method between TAs and students to work on time, but we had a “good enough” version that was working, I could say to others that we have to ship the good enough version today, because we agreed to it; it gave me cover, even though it was an unpopular decision. That kind of clarity in planning and expectations avoids a lot of friction later on.

    You need good tools to work productively with each other

    In The Mythical Man-Month, Fred Brooks claims that “adding [more programmers] to a late software project makes it later”. When you have dozens of contributors to an open-source education curriculum, how do you avoid that? You need good tools. One of the first things the technical team worked on was a smoke test for notebooks. We were worried that one of the notebooks we use for teaching might break. Notebooks are designed for exploration and can be pretty brittle – a simple cell execution order inconsistency can break a notebook. If multiple people are editing and pushing notebooks, it’s almost guaranteed that a notebook will break at some point. If a notebook is broken, an individual TA might not have enough context to diagnose the problem, and we would have to broadcast a fix to our 200 TAs, which would be stressful for TA and student alike.

    So we started with a really simple smoke test, written by Marco. When you push notebooks to GitHub, the continuous integration kicks in, runs your notebook from scratch, and checks that it completes without errors. From these humble beginnings, Michael Waskom built an intricate continuous integration (CI) pipeline to check that the code runs, to check that the style is consistent, to generate versions of the notebooks for students and TAs, etc. This is what allowed the editors to push better versions of the notebooks on a regular basis, and it proved invaluable during a time crunch.
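
    To make the idea concrete, here is a minimal smoke test in that spirit. This is a sketch under assumptions, not NMA’s actual CI code: it assumes the nbformat and nbclient packages are installed, and the tutorials/ directory name is made up for illustration.

    # smoke_test.py - execute every notebook top to bottom on a fresh kernel and
    # fail the build if any cell errors. A sketch, not NMA's actual pipeline;
    # assumes `pip install nbformat nbclient` and a hypothetical tutorials/ tree.
    import sys
    from pathlib import Path

    import nbformat
    from nbclient import NotebookClient

    failed = False
    for path in sorted(Path("tutorials").rglob("*.ipynb")):
        nb = nbformat.read(path, as_version=4)
        try:
            # A fresh kernel per notebook, so stale cell-execution order can't hide bugs
            NotebookClient(nb, timeout=600).execute()
            print(f"OK   {path}")
        except Exception as exc:
            print(f"FAIL {path}: {exc}")
            failed = True

    sys.exit(1 if failed else 0)

    Wire a script like this into a CI job (GitHub Actions, for example) that runs on every push, and a broken notebook gets caught before it ever reaches a TA or a student.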

    Similarly, we used an online video editing tool (WeVideo) rather than an offline one because it allowed multiple people to contribute to video editing. That allowed Tara to oversee a dozen video editors and reassign tasks when necessary.

    You need good organization to work productively with each other

    The easiest way to get a lot of people to contribute lots of content is to have them work in silos. Each of our instruction days was more or less independent of the others, so it made sense to have different people work on each independently. The downside to this approach is that it makes for a jumbled experience for the student. Novice learners, who don’t yet have proficiency in a subject, can easily be thrown off by a change in notation, nomenclature, or tone. Day leaders created one-pagers far ahead of the content so we could diagnose missing prerequisites, incompatible learning objectives and big bumps in difficulty throughout the days. But that wasn’t enough.

    The biggest contributors to a smooth experience for the students were pre-pods and waxers. When I was giving an introductory CS class, I could see the looks of confusion on students’ faces and adjust the content in real time (a high-wire experience, do not recommend). Ideally I would have had another test classroom run through the content in real time to give me detailed notes so I could improve the content before I gave the class in front of the actual students. That’s exactly what we did with pre-pods: we hired a dozen TAs to test the content 3 weeks before the real class (pods before the real ones; hence pre-pods). They gave 360-degree feedback at the micro level (e.g. typos) and macro level (e.g. changes in difficulty from day to day) which could then be taken into account by the content creators. It’s not adversarial like peer review can sometimes feel – everybody is on the same side. It’s similar to UI/UX testing, where you put your product in front of naive users and watch them destroy it from the other side of the one-way mirror so you can make a better product. This kind of design thinking – ship early, ship often, iterate – can be applied to all aspects of open source education.

    The other thing we realized is that writing good notebooks is really hard: it requires the confluence of understanding the content (domain knowledge in comp neuro), programming, and radical empathy towards the learner (caring about andragogy). You also need an understanding of the house style, context across days, and the quirks of the GitHub CI. We needed a dedicated team to polish content, the waxers (we call them editors outside of NMA, but internally the name stuck). Waxers had the most stressful job of all, and I would often see messages exchanged on Slack at 3-4 in the morning about content that needed to be ready for the next day. But it worked! The notebooks were a highlight of the NMA experience, and they will keep giving value to the community for a long time.

    Ella, Michael, Tara, Madineh and I presented our pipeline for the content in this talk:

    Making decisions under uncertainty

    How do you make decisions swiftly and effectively in a big org? We had hundreds of volunteers; 134 authors on the curriculum paper alone. 134 very smart people cannot all be of the same opinion at the same time in the face of uncertainty. The first step to make decisions effectively is to separate decision making and execution. It’s very easy to leave a lot of decisions to be implicit until it’s crunch time. It happened a couple of times that I made a medium-scale decision or encouraged other people to make medium-scale decisions during a daily standup meeting. Big mistake. A lot of people affected by the decisions weren’t in the room; they felt unheard and now burdened to go to a daily meeting to stay in the loop. Don’t do that – take bigger decisions in separate planning meetings. Every software methodology has some notion of a planning meeting, whether waterfall or scrum or whatever – by the end of that meeting you should be clear about what to do for the next week or month.

    One thing that can sap a lot of energy at these meetings is having circular arguments: revisiting the same question in subsequent meetings even though you thought you had come to an agreement. Sometimes, one person thinks a decision was made, while another thinks things were still under discussion. Write down decisions in meeting notes, share them with everyone.

    Some orgs in the for-profit sector are hierarchical, so disagreements get resolved top-down. That doesn’t really work in the non-profit sector, where people are there on a volunteer basis. In non-profit and open source communities, many orgs thus use consensus-based decision making. I think pure consensus-based decision making is very difficult to get right. I’m a member of a not-for-profit makerspace that’s built on anarchist principles. We use consensus-based decision-making, and it gets rowdy: flame wars on our message board are not infrequent, whether about resource allocation, who gets the space when, or whose responsibility it is to take the trash out. Pure consensus-based decision making can have an insidious effect: people who are not in agreement with the majority are left isolated as “difficult” and “not a team player”. That can breed resentment, which leads to personality clashes, which leads to dysfunction: decision making grinds to a halt.

    There’s an alternative to pure consensus-based decision making: disagree and commit. You replace the norm that consensus must be reached with an alternative norm:

    1. everybody’s opinion should be understood
    2. decisions are made based on majority or supermajority
    3. it’s ok to disagree with a decision but we all bind ourselves to the decision

    To make sure that you understand somebody’s opinion – especially an opinion you don’t agree with – you can use the Steel Man technique. You restate the strongest version of their argument and engage with that. Oftentimes that will resolve whatever argument you had in the first place, or prepare you to build a hybrid solution. If you don’t come to a resolution, at least everyone will understand that they have been heard and understood. Decisions can take place in real time in a synchronous meeting, or through a polling app in Slack (if a poll, put a deadline on it so it doesn’t drag on; in either case, quorum should be reached). If somebody disagrees with the decision, the norm says that that’s good and healthy, as long as they commit to the decision like everyone else, so it doesn’t breed resentment in the long term.

    Be the change you wish to see

    My manager at Facebook often talked about positive risk: if you shoot for the moon, sometimes you can actually overshoot and accomplish more than you intended. That always sounded like nonsense to me – I’m a firm believer in Murphy’s law – but I had a chance to witness things going unexpectedly right at NMA. We planned many things, and delivered on what we planned, but perhaps our greatest success was fairly unexpected: pods worked. When we put students together with TAs, and had them interact closely for hours at a time in peer programming, they created bonds. Those bonds tapped into the students’ intrinsic motivation to be part of a community, and they felt a sense of belonging. When the going got tough, and they felt overwhelmed, they tapped into that support network to keep them going one more day. That’s how 87% of students managed to finish the class. One TA described the students tearing up on the last day and staying up late into the morning to say their goodbyes. Yes, materials are important, but in andragogy you also need to answer students’ emotional needs. That’s the biggest differentiator between NMA and a MOOC.

    Building that experience required a ton of time from many dedicated and passionate people, people who wanted to make a difference. Sometimes emotions ran high and I butted heads – I can be a difficult person and that’s something I need to work on. Sometimes we had legitimate disagreements, but oftentimes we were just stressed and sleep-deprived, and we were able to apologize and move on. We were able to deliver a great experience, produce a new model for online learning, and we created bonds with each other that will follow us for a long time. Should you decide to embark on an expansive adventure like that? In the words of Edwin Land (stolen from Jack Gallant’s email signature):

    Don’t undertake a project unless it is manifestly important and nearly impossible

    Edwin Land

    I’d like to thank the organizers, Megan, Brad, Gunnar, Konrad, Paul, Kate, Carsen, Xaq, Aina, Athena, Eric, John, Alex, Yiota, Emma and Beth; the waxers Michael, Madineh, Ella, Matt, Richard, Jesse, Byron, Saeed, Ameet, Spiros; everybody that contributed to technical, Titipat, Jeff, Marco, Arash, Adam, Natalie, Guga, and Tara; and everyone who contributed, whether a few hours or weeks at a time.

    Do you want to join a motley crew of education disruptors? Volunteer for NMA 2021 – calls are broadcast on Twitter.

    in xcorr.net on March 25, 2021 08:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Students Who Want To Cut Down On Their Drinking Often Feel Forced To Compromise For Social Connection

    By Emily Reynolds

    Drinking culture is a huge part of university, with Freshers’ Week events often revolving near-exclusively around getting drunk. A 2018 survey from the National Union of Students found that 76% of respondents feel an expectation for students to “drink to get drunk”; 79% agreed that “drinking and getting drunk” is a key part of university culture.

    This isn’t for everyone, however: a quick search of student forums will show many young people, pre-university, anxious about a drinking culture they don’t want to participate in. Now a new study in the British Journal of Health Psychology, authored by Dominic Conroy from the University of East London and team, has taken a closer look at students’ decisions to reduce their alcohol consumption — and what prevents them from doing so.

    Participants were ten undergraduate students from the UK, all of whom had undergone a transition in their drinking habits, decreasing or stopping altogether. The students had different levels and patterns of drinking: some were light drinkers or completely teetotal, while others were more moderate drinkers.

    The students took part in semi-structured interviews with one of the paper’s authors, responding to open-ended questions related to their drinking. Two “dilemmas” emerged from the participants’ responses: wanting to drink less but being concerned about social ramifications, and wanting to cut down but worrying about social confidence and missing out on fun.

    Resolving the first dilemma — wanting to cut down but being concerned about social ramifications — was considered “important yet difficult to achieve” for participants: not drinking often came with an in-built assumption that the person was “uninterested in socialising” altogether. One participant described involving himself in drinking games during Freshers’ Week purely to build social connections: “I think it would have been more difficult to make friends if I was avoiding drinking completely,” he said.

    For multiple participants, there was a desire to recognise the connections between early-term socialising and the potential for friendship in the years ahead; drinking during the early stages of university was often part of that process. This balancing act wasn’t always easy: one participant, Kelly, said that being open about her drinking preferences led to people “making [her] feel weird” and that her relationship with her flatmates was “quite difficult” because of it.

    The second dilemma — missing out on fun and social confidence brought on by alcohol — was also resonant in participants’ accounts of university life. Alice told the team that “as a sober person, I’m so much more in control… sometimes I think I would enjoy myself more if I was pissed and less inhibited”, pinpointing alcohol as a source of fun, good memories, and uninhibitedness.

    Other participants, more moderate drinkers who had cut down, experienced a similar yearning for the fun of alcohol — but faced a further dilemma of trying to drink less than they had before or simply getting drunk as usual. Here, students felt trapped in a binary of either drinking a lot or drinking nothing.

    The study had a small sample size and was not quantitative, so it’s hard to get an idea of how many students might be feeling this way or are being forced to compromise for the sake of their social lives. However, the insights from the conversations suggest a more complex picture of drinking at university than is sometimes understood.

    There is a clear (and perhaps unsurprising) thread running through the findings that peer pressure or social norms are pushing students into drinking, or into drinking more than they want or intend to. One notable testimony suggested that there is little give-and-take on the part of students who do drink, with those who drink very little being forced to compromise instead. Those who don’t drink also miss out on social connections and fear they are missing out on fun.

    Paying attention to these two broad dilemmas, and the individual issues they encapsulate, may be a way of understanding the issues students face when it comes to drinking, drug-taking and peer pressure during university. The team notes that the students interviewed often (perhaps inadvertently) demonstrated a “sober curious” attitude to drinking, an approach that has taken off in the media over the last few years. Promoting such an approach, which rejects strict dichotomies, may be one way of helping students manage their drinking in a way that feels right for them, shifting norms around alcohol at the same time.

    ‘Maturing Out’ as dilemmatic: Transitions towards relatively light drinking practices among UK University students

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 25, 2021 02:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Here’s What We Listen For When Deciding Whether A Speaker Is Lying Or Uncertain

    By Emma Young

    How do you know whether to trust what someone is telling you? There’s ongoing debate about which cues are reliable, and how good we are at recognising deception. But now a new paper in Nature Communications reveals that we reliably take a particular pattern of speech pitch, loudness and duration as indicating either that the person is lying or that they’re unsure of what they’re saying — and that we do it without even being aware of what we’re tuning into.

    In an initial study, Louise Goupil at Sorbonne University, France, and her team manipulated the pitch, loudness and duration of a series of spoken pseudo-words (which sounded like they could be real words in French, but were not). Twenty native French speakers then listened to these words and rated the speaker’s honesty and also their certainty (honesty and certainty were investigated in two separate trials with the same participants, held one week apart). The participants were also asked how confident they were in their judgements.

    The team’s analysis revealed that a single “prosodic signature” — that is, the same pattern of volume, pitch, and speed — was associated with perceptions of both honesty and certainty. Loudness (especially at the beginning of the word), a lower pitch towards the end of the word, a less variable pitch overall, and a faster pace of speaking were all associated with more honesty/confidence. The opposite patterns were associated with less of either. The team also found that the participants were more confident when making judgements about the speaker’s certainty.

    Of course, uncertainty and deception are not the same thing. A subsequent study involving two groups of 20 participants revealed that context — in this case, background information about what the speakers were purportedly doing when they uttered the pseudo-words — allows us to use the same vocal signature to make judgements about either honesty or confidence, depending on the situation. However, while there was widespread agreement when it came to judging confidence, the group was more split when judging honesty. While the data showed that the prosodic signature informed all their judgements, some of the participants decided that the speaker was faking an honest voice. This suggests that we rely heavily on sensory evidence in inferring a speaker’s level of confidence, but our judgements about whether they’re being truthful or not are more complex.

    Further questioning of this same group of participants revealed that though they did indeed rely on the three-factor prosodic signature in making their judgements, they were to a large extent unaware of just what they were tuning into. The team also studied additional participants whose native languages included English, Spanish, German, Marathi, Japanese and Mandarin Chinese — and found the same results. “Overall, these results demonstrate the language independence of a core prosodic signature that underlies both judgements of certainty and honesty,” the team writes.

    The researchers suggest that the prosodic signature is tied to signs of cognitive effort: someone who has to make more of an effort with what they’re saying — either because they aren’t very sure or because they’re lying — will take longer to say it, for example, and use less emphasis. That could explain why it is apparently not culture-dependent, but fundamental to all people.

    A follow-up study by the team did reveal that when participants heard words spoken with the prosodic signature of unreliability/dishonesty, these words “popped out” against effortless speech, grabbing the participants’ attention. So perhaps, starting in young childhood, we learn to spot these signs of cognitive effort, and learn to interpret them as indicating uncertainty or dishonesty. Alternatively, it might be part of a more ancient, innate system, shaped through evolutionary pressures to know whom to trust.

    The team also found some gender differences in explicit judgements about whether people were lying/uncertain. For example, women were more likely to interpret certain/honest signatures as being faked. However, the data can’t reveal whether these differences were down to gender per se, or other related factors, such as anxiety or empathy.

    Clearly, there are questions still to be answered. But it’s fascinating and, as the researchers write: “Our results add to the growing body of evidence suggesting that, contrary to decades of research arguing that humans are highly gullible, dedicated mechanisms actually allow us to detect unreliability in our social partners effectively.”

    There are some immediate practical implications, too. We evolved, of course, to interact face to face — and we can only use this prosodic signature when someone is talking to us, not if they are giving us information via a keyboard and screen. Whatever the benefits of this increasingly common way of interacting, the silencing of this particular method for spotting liars and hustlers is clearly a cost.

    Listeners’ perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 24, 2021 12:53 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An interview with protocols.io: CEO Lenny Teytelman on partnering with PLOS

    Our ongoing partnership with protocols.io led to a new and exciting PLOS ONE article type, Lab Protocols, which offers a new avenue to share research in-line with the principles of Open Science. This two-part article type gives authors the best of both platforms: protocols.io hosts the step-by-step methodology details while PLOS ONE publishes a companion article that contextualizes the study and orchestrates peer review of the material. The end result is transparent and reproducible research that can help accelerate scientific discovery.

    Read our interview with protocols.io CEO and co-founder Dr. Lenny Teytelman where he discusses what triggered the concept for the company, how researchers can benefit from this collaboration and how his team settled on such an adorable raccoon logo.

    For those who are unfamiliar, what is protocols.io? 

    It is a platform for developing and sharing reproducible methods. It serves both researchers needing secure and private collaboration and those aiming to share their knowledge publicly.

    What was the inspiration behind protocols.io? Did you have any particular experiences or issues as a researcher that led to you believing protocol sharing was an under-supported element of the scientific community? 

    It was a year and a half of my postdoc that went into correcting just a single step of the microscopy methods that I was using. Instead of a microliter of a chemical, the step needed five, and instead of a fifteen-minute incubation with it, it needed an hour. The problem is that this isn’t a new technique but is a correction of a previously-published method. That means I didn’t get any credit for that year and a half, and everyone using this method was either getting misleading results or had to spend one or two years rediscovering what I know—rediscovering what I’d love to share with them, but didn’t have a good way of doing.

    This is the experience that led to my obsession with creating a central place to facilitate the sharing of such knowledge, within research groups and companies and broadly with the world.

    As authors of PLOS ONE began to share methods, we realized that all researchers and disciplines had a need for a dedicated tool for proper method sharing… We now warmly welcome methods from all fields and disciplines, including psychology, social sciences, clinical, chemistry, and so on.

    – Lenny Teytelman, CEO and co-founder of protocols.io

    Protocols.io launched in 2014 — how were the first few years? How were you able to make yourself known and show that your product is worthwhile and secure for researchers? 

    Oh, you just touched on a painful topic. I actually gave a talk at a conference in 2018, called The harsh lessons from 4 years of community building at protocols.io. The first years were surprisingly rough. As an academic with zero entrepreneurial experience, I naively expected that once we built the platform, I’d tell my scientist friends that it exists, and through word of mouth, it would go viral. Turns out that’s not how new initiatives work. There is a lot of dedicated work needed to build trust and visibility and simply let researchers know that you exist.

    It took the support of publishers, societies, funders, and Open Science and reproducibility champions to climb out of obscurity. Speaking of that support, the 2017 partnership between PLOS and protocols.io was critical in helping the research community learn about protocols.io.

    How has the platform evolved since then? What elements have been consistent and what have been some major changes? 

    Our vision and mission have not really changed since 2014, but thanks to the constant feedback of the research community and a brilliant CTO and co-founder Alexei Stoliartchouk listening to that input, the product has grown from a rudimentary website to a powerful tool with amazing functionality to support the sharing of the method details and to help in the daily work of the researcher. For a fun comparison of where we were at launch in February 2014, take a look at our Kickstarter video.

    One of the early changes was in scope, a year after we launched. We initially thought that this was a platform for experimental wetlab scientists, but soon after launch, requests started to come in to expand it to support computational/bioinformatics workflows. And then in 2017, there was another expansion in scope, catalyzed by that same 2017 partnership with PLOS. As authors of PLOS ONE began to share methods, we realized that all researchers and disciplines needed a dedicated tool for proper method sharing. This actually led us to change our welcome page from “Repository for Life Science Methods” to “Repository for Research Methods.” We now warmly welcome methods from all fields and disciplines, including psychology, social sciences, clinical, chemistry, and so on.

    How important is the new PLOS ONE Lab Protocol article type to the journey and mission of protocols.io? 

    Soon after our launch, when I realized that things don’t just “go viral”, I began to think a lot about incentives. It occurred to me some time in 2015 that it would be amazing if researchers had a way of turning their protocols into peer reviewed papers. I was looking at the F1000Research model and wondering if we could add peer review to protocols.io. The problem was that our scope was so broad, we would need thousands of academic editors to be able to support the peer review — essentially we’d have to build PLOS ONE. Then, in 2017 when we started working with PLOS, I realized, “We don’t have to build PLOS ONE! It already exists; we just need to partner!”

    And while this partnership is a big deal for protocols.io, I am particularly excited about it because of what it can do for reproducibility and Open Science. As I said when we announced the launch, “We’re thrilled to extend our partnership with PLOS by launching this new modular article type. This will provide authors with all of the benefits of rigorous peer review, plus a dynamic and interactive platform to present their protocols, with support for videos and precise details that are important for adopting and building upon the published methods.”

    What type of research does not need to be as fast as possible? Do malaria or pediatric cancer patients somehow have the luxury of time? Can our planet afford delays in climate research? Open and rapid sharing as we see today for COVID-19 must be the norm, not an exception.

    – Lenny Teytelman, on the importance of Open Science

    PLOS and protocols.io have similar mission statements and nearby offices: how did this collaboration with PLOS start? 

    Many people assume that our collaboration with PLOS is a consequence of me being co-advised as a graduate student by Professor Michael Eisen, co-founder of PLOS. That actually is not the case. It is true that as soon as I realized in 2012 that something like protocols.io needs to be built, I called Professor Eisen, described the idea, and said, “PLOS should build such a platform.” But he replied, “PLOS is a publisher, not a software developer. You need to build it.”

    So in fact, it is the Chief Science Officer of PLOS, Dr. Veronique Kiermer, who is the key early lead on the PLOS/protocols.io connection. Dr. Kiermer was the founding editor of Nature Methods and has a passion for reproducibility and a deep appreciation for the essential role that protocols play in the research cycle. We met at an Open Access mixer in San Francisco when she had just moved to PLOS. As I showed her what we had built, she asked countless questions of “can it do this and that” and I showed her the “yes” answers right on my phone. She got excited and said, “This is exactly what I always dreamed of. We should do something together!”

    What has the COVID-19 pandemic taught you about the importance of Open Science and supporting collaborative research opportunities? 

    I’ve been stunned by the extent of rapid sharing and collaborative spirit in the research world, united against the COVID-19 pandemic. It is how you ideally imagine science working. It is how science should work. We’ve seen a remarkable level of rapid method sharing in the SARS-CoV-2 group. But it’s not just the methods; researchers are sharing data, preprints, code in precisely the way that accelerates and amplifies everyone’s efforts around the globe.

    I am simultaneously inspired by what I see and frustrated that this isn’t yet the norm. I’ve been watching the world and publishers declare in 2015 that rapid sharing and immediate Open Access are essential for Ebola research. Then the same in 2016 for Zika. Then 2019 for all research related to the opioid crisis. In 2020 and 2021 it’s COVID-19. It’s frustrating that we make exceptions for pandemics and crises and then go back to the traditional stunted way of sharing. 

    What type of research does not need to be as fast as possible? Do malaria or pediatric cancer patients somehow have the luxury of time? Can our planet afford delays in climate research? Open and rapid sharing as we see today for COVID-19 must be the norm, not an exception.

    What’s next for protocols.io? (How can you continue to help scientists adopt Open Science activities?) 

    The beautiful part of our growth (we have over 9,000 public protocols now and have been roughly doubling every year since our launch) is that with increased sharing and more researchers on protocols.io, we also receive more feedback and requests than ever. As more requests and suggestions come in, our appetite for improving and enhancing the platform only increases. It’s kind of like with research itself – each answer leads to more questions and more thirst for experiments.

    We are also genuinely excited about the PLOS ONE Lab Protocols and we look forward to the first papers being published and to the future developments in this partnership. We’re just getting started.

    Bonus Question: How did you decide on the cute raccoon logo? 

    Too many reasons to list here! Read this Twitter thread to find out!

    Thank you to Lenny for his time and thoughtful answers. Be sure to visit protocols.io to start browsing through study methodologies or read more about Lab Protocols.

    The post An interview with protocols.io: CEO Lenny Teytelman on partnering with PLOS appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 23, 2021 04:04 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Taking Lecture Notes On A Laptop Might Not Be That Bad After All

    By Emma Young

    “The pen is mightier than the keyboard”… in other words, it’s better to take lecture notes with a pen and paper rather than a laptop. That was the hugely influential conclusion of a paper published in 2014 by Pam Mueller and Daniel Oppenheimer. The work was picked up by media around the world and has received extensive academic attention; it’s been cited more than 1,100 times and, as the authors of a new paper (also in Psychological Science) point out, it often features in discussions among educators about whether or not to ban laptops from classrooms. However, when Heather Urry at Tufts University, US, and her colleagues ran a direct replication of that original study, their findings told a different story. And it’s one that the team’s additional mini meta-analysis of other directly comparable replications supported: when later quizzed on the contents of a talk, participants who’d taken notes with a pen and paper did no better than those who’d used a laptop.

    In the new replication, as in the original study, the 142 participants were all university students (this time at Tufts, rather than Princeton), who took notes while watching one of five roughly 15-minute-long TED talks. As before, there was then a roughly 30-minute delay during which they completed distractor tasks. After this, they completed a quiz on the facts and concepts presented in the talk.  

    When the team analysed the data, they found that, as in the original study, participants who’d used a laptop to take notes recorded more words overall, and more verbatim phrases from the talk. Also as before, both groups performed equally well when it came to recalling the factual content of the talks. However, while Mueller and Oppenheimer found an advantage for longhand note-taking on the recall of concepts presented in the talks, Urry and her colleagues did not.

    Despite the team’s efforts to replicate the original study as closely as possible, there were some differences between the two — including some that the researchers acknowledge could have affected the results. For example, more of the participants in the new study reported typically taking class notes by hand. Also, in the original research, participants completed the study in a classroom, mostly in groups of two, and watched the talks on a screen at the front of the room. In this new research, about 80 of the participants completed the study outside of a class, and they viewed the talk on a laptop provided by the team. Many of these sessions were “subject to distractions and errors,” the authors comment. This might have influenced the results.

    Also, this is of course only one replication. So the team identified a total of eight studies (including theirs and two from the original paper) that they felt were sufficiently similar to be directly compared. This mini meta-analysis also failed to find an advantage for longhand note-taking on conceptual recall.

    Urry and her colleagues do accept that there were some limitations to all of these studies, including theirs. For example, TED talks are brief, and not hugely similar to typical university lectures; there are no pauses for extra note-taking or questions, for example. “Future studies should use approaches that better represent real-world settings,” the team recommends. Future studies should also consider new note-taking strategies (such as the use of styluses to write notes on a paper-like screen) as well as individual participants’ note-taking preferences. Also, there was only a 30-minute delay between watching the talk and being quizzed on it; this is not typical of real-world university learning either, which suggests caution in extrapolating from these results to likely impacts on actual students.

    Still, given the influence of the original 2014 study, it’s important to note this failure to replicate, and the researchers’ cautionary conclusion: “Until future research determines whether and when note-taking media influence academic performance, we conclude that students and professors who are concerned about detrimental effects of computer note-taking on encoding information to be learned in lectures may not need to ditch the laptop just yet.”

    Don’t Ditch the Laptop Just Yet: A Direct Replication of Mueller and Oppenheimer’s (2014) Study 1 Plus Mini Meta-Analyses Across Similar Studies

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 23, 2021 02:38 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    2-Minute Neuroscience: Motor Neurons

    Motor neurons are neurons that carry information from the brain or spinal cord to regulate activity in muscles or glands. In this video, I will discuss upper and lower motor neurons. I’ll cover their functions and discuss the syndromes that result from damage to motor neurons: upper motor neuron syndrome and lower motor neuron syndrome.

    in Neuroscientifically Challenged on March 23, 2021 10:23 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Didier Raoult fraud: “Je ne regrette rien”

    One year on: more fake data, financial fraud and illegal and falsified clinical trials by the chloroquine guru Didier Raoult.

    in For Better Science on March 23, 2021 08:34 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Disability, access, and the virtual conference

    After my first Zoom meeting of the pandemic, I found myself lying on the bathroom floor with my noise-cancelling headphones on, on the verge of a full-blown meltdown. As an autistic person, I’ve always been hypersensitive to noise and to visual stimuli—but I hadn’t realized that a Zoom meeting with my colleagues could cause sensory overload. The number of images on the computer screen, the amount of movement in those tiny thumb-nail images, and the speed with which the images had moved, flashed, and changed—not to mention the obtrusive noise of malfunctioning microphones and noisy, socially-distanced households—had been enough to make me physically sick. Although many people tend to assume that an online event is automatically more accessible than an in-person event (after all, people can attend without leaving their homes), this isn’t always the case. Even when online events are more accessible, one of the lessons the pandemic has reminded me of is that the act of creating disability access can fundamentally change the nature of the thing that is accessed.

    “Creating access for people with disabilities sometimes means fundamentally changing the nature of the thing that is made accessible”

    In writing my new book, Shakespeare and Disability Studies, I was interested in exploring disability access as art and in exploring disability access as a complicated (and complicating) multifaceted phenomenon. Disability access is rarely just a question of “Can someone with X impairment use Y?” but rather more often a question of “If we modify Y so that someone with X impairment can use it, how does it change the meaning, the experience, and the effect of Y?” Creating access for people with disabilities sometimes means fundamentally changing the nature of the thing that is made accessible, whether the thing made accessible is a Shakespeare play (“the play’s the thing”) or a Zoom meeting. When we change the nature of the thing made accessible, we don’t just create access and inclusion for people with disabilities—we often create a new kind of experience altogether. I continue to be delighted and inspired by the innovative works, events, and objects we create (whether knowingly or inadvertently) when we create access for people with disabilities. Sometimes those new works, events, and objects bear a resemblance to the inaccessible version from which they grew. At other times, they do not.

    Perhaps the most complex possibility, and the one most fundamentally counter-intuitive to a culture in which able-bodied and neurotypical (non-autistic) thinking are prioritized, is when creating accessibility not only fundamentally changes the nature of the thing made accessible but also causes attendees with a disability to lose something valuable. This seems counter-intuitive because stereotypical ways of thinking about disability have (falsely) taught us that disability experience does not include pleasure and that eliminating barriers to access is always an inherent and uncomplicated good. Neither of these things is true. For example, this year I’ll lead seminars at two Shakespeare conferences (the Shakespeare Association of America and the World Shakespeare Congress). Because of the pandemic, both conferences, for the first time ever, will be held completely online. However, these virtual events are more complicated for me than they originally seem, and their accessibility represents both gain and loss.

    “I continue to be delighted and inspired by the innovative works, events, and objects we create (whether knowingly or inadvertently) when we create access for people with disabilities.”

    In some ways, both conferences will be more accessible for me in an online format. Traveling can be difficult for some autistic people (keeping in mind the effect of Zoom on my body and mind, imagine the effect of a crowded airport). I usually travel annually to the Shakespeare Association of America—but with much ado. I can’t fly alone (nor hope to navigate unfamiliar city streets or crowded hotels by myself) and so always travel with a support person. My support person is usually another Shakespearean, a kind volunteer who, after weathering the sensory travails of the airport and public transportation, will spend the conference with me. I am not independent. But also, I am never alone. Everything in my life is shared. There is a certain beauty in that. After a busy day of seminars and panels, I will lie on the hotel room bed in the fetal position, far too worn out from the crowds and the noise to do anything else, and my support person and I will gossip together with great pleasure about the intellectual goings-on of the conference.

    Disability, in our culture, is stereotyped as loss. Accessibility, in our culture, is deemed always good, always a gain. It is also often misunderstood as simple and one-dimensional, as easy to understand and to explain. This year, for the first time ever, I will attend the Shakespeare Association of America without a support person. I will be independent. It is important, however, to think about the enormous cost of making the conference fully accessible to me. This act of access will fundamentally change the nature of what the conference is. The only way to make the conference fully accessible to me is for its social component, its travel component, its in-person component, to be completely removed. This represents a loss for all of the people at the conference and, ironically, also a loss for me. Because after my seminar at the 2021 Shakespeare Association of America, I won’t return to my hotel room to gossip about the intellectual goings-on of the conference with my support person. Rather, I will shut down my computer and reflect silently on the events of the conference… alone… for the first time in my life.

    Featured image by Jon Tyson on Unsplash

    The post Disability, access, and the virtual conference appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on March 22, 2021 12:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “We will create a permanent research integrity office at Unicamp to protect society and the researcher,” says Mario Saad

    Mario "Fakenews" Saad is entering a run-off election to become rector of his Brazilian university. The man responsible for massive research fraud and 18 retractions plays the victim of a "Cancel Culture". Saad also announces to create an "Office for Research Integrity", to legalise misconduct and to punish the whistleblowers.

    in For Better Science on March 19, 2021 10:28 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Female Choice by Meike Stoverock: book review

    I review here a new German book about "The Beginning and the End of the Male Civilization". And then I discuss a related research paper.

    in For Better Science on March 18, 2021 10:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How to Exploit Fitness Landscape Properties of Timetabling Problem: A New Operator for Quantum Evolutionary Algorithm

    This week on Journal Club session Mohammad Hassan Tayarani Najaran will talk about a paper "How to Exploit Fitness Landscape Properties of Timetabling Problem: A New Operator for Quantum Evolutionary Algorithm".


    The fitness landscape of timetabling problems is analyzed in this paper to provide some insight into the properties of the problem. The analyses suggest that good solutions are clustered in the search space and that there is a correlation between the fitness of a local optimum and its distance to the best solution. Inspired by these findings, a new operator for Quantum Evolutionary Algorithms is proposed which, during the search process, collects information about the fitness landscape and tries to capture the backbone structure of the landscape. The knowledge it has collected is used to guide the search process towards better regions of the search space. The proposed algorithm consists of two phases. The first phase uses a tabu mechanism to collect information about the fitness landscape. In the second phase, the collected data are processed to guide the algorithm towards better regions in the search space. The algorithm clusters the good solutions it has found in its previous search. Then, when the population has converged and become trapped in a local optimum, it is divided into sub-populations, and each sub-population is assigned to a cluster. The information in the database is then used to reinitialize the q-individuals so that they represent better regions of the search space, as sketched below. In this way the population maintains diversity and, by capturing the fitness landscape structure, the algorithm is guided towards better regions in the search space. The algorithm is compared with some state-of-the-art algorithms from the PATAT competition conferences, and experimental results are presented.
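
    As a rough sketch of the reseeding idea described above (an illustration under assumed names, not the authors' implementation), one can cluster an archive of good solutions and derive one bit-probability vector per cluster, from which new individuals are sampled. Here a q-individual is simplified to a plain vector of probabilities that each bit equals 1:

    import numpy as np
    from sklearn.cluster import KMeans

    def reseed_probability_vectors(archive, n_clusters):
        """Cluster an archive of good binary solutions and return one
        bit-probability vector per cluster for reinitializing individuals."""
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(archive)
        vectors = []
        for k in range(n_clusters):
            freq = archive[labels == k].mean(axis=0)  # per-bit frequency of 1s
            # Shrink toward the cluster's bit frequencies, keeping every
            # probability inside [0.1, 0.9] to preserve some diversity.
            vectors.append(0.1 + 0.8 * freq)
        return vectors

    rng = np.random.default_rng(0)
    archive = rng.integers(0, 2, size=(50, 32))   # 50 good 32-bit solutions
    probs = reseed_probability_vectors(archive, n_clusters=4)
    sub_population = (rng.random((10, 32)) < probs[0]).astype(int)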


    Papers:

    Date: 2021/03/17
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 17, 2021 04:33 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Open Response to “Economic impact of UKRI open access policy” report

    On February 17, 2021, FTI Consulting released the report “Economic assessment of the impact of the new Open Access policy developed by UK Research and Innovation” (full title), prepared for the Publishers Association (PA) of the UK. While we are not setting out to provide an extensive review and analysis of this report, we do want to refute, in general terms, the assertion that OA via the UKRI policy is economically damaging, and we’ll provide some references that support this position.

    Open Access and Open Research are an opportunity

    The FTI/PA report asserts that the impact of the new UKRI policy could lead to a loss in revenue for “UK-based journals” and have a broader negative impact on UK competitiveness, economic growth, and the UK’s position as a “global research hub.” This assessment appears to use the impact on traditional publication models as the basis for its conclusion. A problem here, however, is that it treats the scholarly publishing industry as somehow separate from the broader economic impact of making research open. Previous reports from the PA itself have shown that the link to the economy is much broader than this depiction. The UK’s R&D roadmap clearly embraces open research and rapid open dissemination as not just good for science and health (especially during the current pandemic) but as a core part of “a world-leading system that unlocks innovation and growth throughout all parts of the economy.” Open Access enables this entire ecosystem to operate more efficiently and cost-effectively. When coupled with Open Research, it brings the additional benefit of increasing trust in science, ever more important during these difficult times.

    Open Research (of which Open Access is one part) offers even greater opportunity both in terms of discovery and economic growth. For example, the European Data Portal’s report on the economic impact of open data identifies the potential for 15.7% growth across a range of sectors – including scientific & technical, communication, transport, finance, health, retail and agriculture – and significant cost savings across the economy. We believe that a focus on Open Research, and all the efficiencies it brings, is worth considering for the UK economy especially in light of the combined impacts of the pandemic and Brexit (whatever one’s position on Brexit is).

    Furthermore, Open Access is the fastest-growing publishing model. A recent report from the scholarly analytics firm Dimensions shows that “in 2020 […] more outputs were published through Open Access channels than traditional subscription channels globally.” The report further shows that the majority of the growth in OA is fueled by Gold OA, i.e., OA supported by some kind of business or sustainability model. It is ever more futile, in our opinion, to take such a critical and isolated position against the UKRI policy when one can argue that OA is an outcome that will simply continue to happen. It is far more beneficial, as the policy itself proposes, to establish the UK as an early adopter, leader, and beneficiary of OA and its opportunities.

    Publishers can and should lead

    The benefits of OA, the opportunities created by OA, and the desire for OA by the public and funders like UKRI, are clear. Contrary to Section 4.44 of the report, which mischaracterizes PLOS as publishing “at cost”, it is entirely possible to maintain value for authors, funders, and institutions while operating a sustainable, surplus-generating business model. PLOS believes that publishers globally should be leading the efforts to devise and develop the next generation of business models that are able to support their operations in an Open Access context. This will, of course, require deep, and sometimes difficult, work by transitioning publishers. But we strongly believe this work is not outside the acceptable effort level of conscientious members of the scholarly publishing industry that have been aware of Open Access, and its benefits, for at least the last 20 years.

    We would like to thank the UKRI for its efforts to hear from stakeholders in the scholarly publishing community, and beyond, regarding the proposed policy. We understand it is a discussion rife with passionate viewpoints and unseen complexities, and so we would like to ensure that one point is clear: to successfully set up the most efficient, frictionless Open Access ecosystem, we should leverage the existing budgets and infrastructures of scholarly publishing but with OA as the outcome. This way, Open Access is not viewed as a destructive force, or something external and different that traditional publishers are not part of, but simply as the new way to publish and communicate research that all publishers can facilitate.

    Sincerely,

    –PLOS

    The post Open Response to “Economic impact of UKRI open access policy” report appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 17, 2021 02:04 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why not chemical castration (to escape COVID-19)?

    Male boldness causes COVID-19 death, go figure. This ridiculous quackery from Brazil is based on fudged clinical trials, sponsored by an obscure Californian hair loss business, and even Torello Lotti is on board!

    in For Better Science on March 16, 2021 11:19 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “A real win for researchers”: UC and Elsevier sign transformative agreement

    In new shared-funding model, University of California researchers can publish open access and read content across Elsevier’s extensive journal portfolio

    in Elsevier Connect on March 16, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 15 March 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 15 March at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public, and everyone is welcome to attend.

    You can use this link to convert the meeting time to your local time, or run this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @nerdsville.

    We hope to see you there!

    in NeuroFedora blog on March 15, 2021 10:18 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    President’s Award for Outstanding Creative Activity

    En-Bing Lin "has published more than 80 peer-reviewed articles, received over $535,000 in grant funding and given 74 external presentations, including 14 keynote speeches at international mathematic conferences."

    in For Better Science on March 15, 2021 06:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Update: Name change policy

    arXiv is updating its name change policy to provide registered authors with more control over their online identities. This approach, which reduces barriers to changing public records, fosters diversity and reflects our inclusive ethos.

    arXiv now offers the following name change options:

    1. In full text works: the author name can be changed in the PDF and/or LaTeX source where it appears in the author list, acknowledgments, and email address
    2. In metadata: the name and email address can be changed in the author list metadata and in the submission history metadata for all existing versions
    3. In user accounts: the name, username, and email address can all be changed

    We are not currently able to support name changes in references and citations of works. Also, arXiv cannot make changes to other services, including third party search and discovery tools that may display author lists for papers on arXiv. arXiv will continue to evaluate and adapt its policies as best practices evolve, and to allow users to directly manage their identity, while maintaining discoverability.

    arXiv began discussing this issue in 2018 at the request of arXiv community members. We then consulted with arXiv’s advisory boards, in addition to librarians, publishers, and diversity experts. This latest update is an outgrowth of this work and reflects arXiv’s support for name changes.

    arXiv’s new policy aligns with guiding principles recently provided by the Committee on Publication Ethics (COPE), a global organization that aims to integrate ethical practices as a normal part of the publishing culture worldwide. The group expects to release more complete guidance on the issue later this year.

    If you would like to request a name change, please contact us through our user support portal or at help@arxiv.org.

    in arXiv.org blog on March 11, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    World Kidney Day 2021: The Goal of Life Participation

    Today, 11 March 2021, BMC Nephrology is celebrating World Kidney Day. This awareness campaign started in 2006 as a joint effort of the International Society of Nephrology and the International Federation of Kidney Foundations with the goal of increasing awareness of kidney disease worldwide.

    Themes over the years have highlighted the impact of risk factors such as diabetes and obesity, and have focused on transplantation and the health of women and children. This year’s theme is “Living Well with Kidney Disease”, with the goal of patient empowerment and life participation. This is a striking reset in priorities. There has been a tremendous focus on outcomes as measured by lab values and hospitalization rates, with reimbursements affected positively or negatively when targets are not met. Protocol development by insurers and hospital systems, along with guidelines, has reinforced the emphasis on data. This year’s theme reframes the care of individuals with kidney disease as improving outcomes so that they can continue to participate in their lives. The theme also emphasizes that meeting laboratory targets and following protocols do not equate to fully taking care of the patient.

    Life participation is not something easily measured and cannot really be determined without the input of the patient. For some, it will be the ability to work, to participate in family activities, to vacation, or to control their symptoms. It puts patients and their caregivers at the center of their treatment plan and of setting the goals of their care. To deliver this, healthcare providers will need to provide the education and support to empower patients and caregivers to take a more active role and to have discussions about what is important to them.

    This year’s World Kidney Day theme is a good reminder to all health providers that we are taking care of the individual not just treating a disease process.

    This year’s World Kidney Day theme is a good reminder to all health providers that we are taking care of the individual, not just treating a disease process. Hearing our patients’ concerns, challenges, and what they value should be part of our routine. Kidney disease for many is a life-changing diagnosis. While we cannot change the diagnosis of kidney disease, addressing the anxiety and frustrations many patients have will help their overall care. The initiative of living well with kidney disease serves to remind nephrology providers that kidney disease is part of the patient’s life but should not be their entire life. I would take this further: it may be a reminder for all of us taking care of individuals with a variety of health conditions. We need to take care of the individual, not just the disease.

    The post World Kidney Day 2021: The Goal of Life Participation appeared first on BMC Series blog.

    in BMC Series blog on March 11, 2021 08:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A Farewell to ALM, but not to article-level metrics!

    Author: Chris Haumesser, Director of Platform and Engineering at PLOS

    Understanding the impact of research is of vital importance to science. The altmetrics movement is defined by its belief that research impact is best measured by diverse, real-time data about how a research artifact is being used and discussed. These metrics serve as “alternatives” to relying solely on traditional shorthand assessments like citation counts and Journal Impact Factor.

    PLOS was an early adopter and advocate of these metrics, with its internally-developed Article Level Metrics (ALM) platform among the earliest implementations to launch in 2009. By displaying data from multiple sources, PLOS ALM showed how an article was being read, downloaded, cited, discussed and shared, accessible from the article itself by anyone.

    In its decade of service to the PLOS community, ALM has helped blaze a trail for others to follow. Today altmetrics are so commonly taken for granted that the “alternative” moniker seems anachronistic – the ubiquity of these metrics is a testament to the return on our investment in ALM.

    In fact, the altmetrics movement has been so successful that it has spawned a market of providers who specialize in collecting and curating metrics and metadata about how research outputs are used and discussed. 

    One of these services, in particular, has far outpaced the reach and capabilities of ALM, and PLOS is now excited to pass the baton of our altmetrics operations to the experts at Altmetric.

    Given the historical significance of ALM, it’s not an easy decision to say goodbye. But PLOS is as committed as ever to providing our community with the best possible data about how their research is changing the world. Altmetric’s singular focus on research metrics positions them to deliver on this promise with a breadth and depth that PLOS simply can’t match as an organization with competing priorities.

    Partnering with Altmetric will unlock data from many sources beyond ALM that were previously untracked. After an extensive analysis, PLOS is confident that Altmetric will provide increased coverage across the board for the vast majority of papers in our corpus. 

    Beginning today, the data displayed on the “Metrics” tab of our published articles will all come from Altmetric, and the “Media Coverage” tab of our published articles will link to Altmetric’s media coverage. Each article will also have a link to an Altmetric details page, which displays extensive detailed metrics for the article. 

    As part of this transition, authors may see their metrics change due to the change in data provider. We expect authors to see some metrics increase due to Altmetric’s increased coverage and new sources. However, we are retiring some areas of our metrics provision entirely. Unfortunately, our articles will no longer display PMC usage counts, as these were aggregated by ALM and will no longer be available. We will also be removing recent tweets from articles and retiring the ALM Reports service.

    Moving forward we will continue to evaluate the presentation of metrics on our articles and look for ways to integrate even more relevant data from Altmetric into our user experience.

    The post A Farewell to ALM, but not to article-level metrics! appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 10, 2021 03:01 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering

    This week on Journal Club session Deepak Panday will talk about a paper "Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering".


    This paper represents another step in overcoming a drawback of K-Means, its lack of defense against noisy features, using feature weights in the criterion. The Weighted K-Means method by Huang et al. (2008, 2004, 2005) [5, 7] is extended to the corresponding Minkowski metric for measuring distances. Under the Minkowski metric, the feature weights become intuitively appealing feature rescaling factors in a conventional K-Means criterion. To see how this can be used in addressing another issue of K-Means, the initial setting, a method to initialize K-Means with anomalous clusters is adapted. The Minkowski metric-based method is experimentally validated on datasets from the UCI Machine Learning Repository and on generated sets of Gaussian clusters, both as they are and with additional uniform random noise features, and appears to be competitive in comparison with other K-Means based feature weighting algorithms.
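
    For intuition, the criterion scores a point x against a center c as d(x, c) = sum_v w_v^p |x_v - c_v|^p, so each weight w_v acts as a rescaling factor on feature v. A minimal numpy sketch of the assignment step under this metric (illustrative names, not the paper's code):

    import numpy as np

    def mwk_distance(x, center, weights, p):
        """Minkowski weighted distance: sum_v w_v**p * |x_v - c_v|**p
        (the p-th root is not needed for nearest-center assignment)."""
        return np.sum((weights ** p) * np.abs(x - center) ** p)

    def assign_clusters(X, centers, weights, p):
        """Assign each row of X to its nearest center under the weighted metric."""
        d = np.array([[mwk_distance(x, c, weights, p) for c in centers] for x in X])
        return d.argmin(axis=1)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    centers = X[rng.choice(100, size=3, replace=False)]
    weights = np.full(4, 0.25)       # uniform feature weights to start
    labels = assign_clusters(X, centers, weights, p=2.0)

    In the full method the weights themselves are updated each iteration from the within-cluster feature dispersions, so noisy features end up with small weights.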


    Papers:

    Date: 2021/03/10
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 10, 2021 03:01 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How to successfully expand your society’s journal portfolio

    See how one professional society has launched new journals to better serve its community

    in Elsevier Connect on March 10, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Highlights of the BMC Series – February 2021

    BMC Public Health

    Associations between occupation and heavy alcohol consumption in UK adults aged 40–69 years: a cross-sectional study using the UK Biobank

    Alcohol consumption and its associated consequences, including cancers and heart disease, remain a major public health challenge. Investigating the factors that contribute to alcohol consumption can help determine where to target intervention resources.

    The authors found strong associations between occupations and heavy alcohol consumption

    Researchers from the University of Liverpool, Thompson and Pirmohamed, used the UK Biobank to investigate the association between occupation and heavy alcohol consumption in working individuals aged 40–69 years in the UK.

    The authors found strong associations between occupations and heavy alcohol consumption, with jobs classified as skilled trades the most likely to be associated with heavy drinking. The largest ratios of heavy drinkers were observed for publicans and managers of licensed premises, industrial cleaning process occupations, and plasterers, whereas clergy, physicists, geologists and meteorologists, and medical practitioners were least likely to be heavy drinkers. The authors’ findings help to determine which employment sectors may benefit most from health promotion programs.

    BMC Anesthesiology

    A gap existed between physicians’ perceptions and performance of pain, agitation-sedation and delirium assessments in Chinese intensive care units

    This highlights the need for prompt quality improvement

    Pain, agitation-sedation and delirium (PAD) management are key elements in the care of critically ill patients. However, previous research has highlighted the gap between actual clinical practices and physicians’ attitudes to PAD management. Zhou et al. investigated the current practice of PAD assessments in Chinese ICUs by a one-day point prevalence study combined with an on-site questionnaire survey.

    Fig. 5 taken from Zhou et al.

    The authors concluded that the actual PAD assessment rate was suboptimal, especially with regard to delirium screening, and that there was a significant gap between actual practice and physicians’ perception of that practice. Physicians reported assessing pain and agitation-sedation in only approximately 20 to 25% of patients, which is lower than previous reports. The study therefore highlights the need for prompt quality improvement and for optimizing PAD management practices in ICUs in China.

    BMC Gastroenterology

    Impact of improvement of sleep disturbance on symptoms and quality of life in patients with functional dyspepsia

    Many patients with functional gastrointestinal disorders have sleep disturbance and this impacts their quality of life. However, it is not yet fully understood how sleep disturbance affects the pathophysiology of functional dyspepsia (FD). Kuribayashi et al. carried out a prospective study on 20 patients to investigate the relationship between FD and sleep disturbance. Patients took sleep aids for 4 weeks and filled out questionnaires before and after taking sleep aids.

    Sleep disturbance was significantly improved by 4-week administration of sleep aids

    The authors found that sleep disturbance was significantly improved by the 4-week administration of sleep aids and that, as a result, GI symptoms, anxiety, and quality of life in patients with FD also improved. In addition, the authors concluded that the use of sleep-inducing drugs was associated with reduced pain as well as improvement of dyspeptic symptoms in FD patients. Overall, the study highlights the potential benefits of sleep aids for patients with FD and sleep disturbance, although multicenter studies involving larger numbers of cases are needed for further investigation.

    BMC Research Notes

    Trunk and lower limb muscularity in sprinters: what are the specific muscles for superior sprint performance?

    Previous research has reported that many muscles of the trunk and lower limb are larger in sprinters than in non-sprinters. However, the specific muscles that contribute to sprinters’ superior sprint performance have not been fully identified. Suga et al. examined the relationships between trunk and lower limb muscle cross-sectional areas and sprint performance in well-trained male sprinters.

    Their findings showed that larger absolute and relative cross-sectional areas of the psoas major and gluteus maximus correlated with better personal-best 100 m sprint times. The psoas major and gluteus maximus may therefore be the muscles most specific to superior sprint performance in sprinters. The study also corroborates previous work suggesting that the hamstrings may not be an important muscle group for achieving superior sprint performance.

    BMC Women’s Health

    Alleviating psychological distress associated with a positive cervical cancer screening result: a randomized control trial

    Cervical cancer is the fourth most common cancer among women globally and cytology-based (Pap smear) screening is important for early detection and treatment. Although cervical cancer screening is beneficial and can enable early detection, a positive screening result might cause psychological burden. As a result, this may influence the decision to undergo further examination and future screening for cervical cancer.

    Psychological distress appeared to be higher in the control group

    Isaka et al. carried out a randomized controlled trial in Japan, with the intervention being the provision of information about cervical cancer and cervical cancer screening through a leaflet. The authors’ aim was to evaluate whether the leaflet would help reduce psychological distress. Women who were about to undergo cervical cancer screening were randomly assigned to receive hypothetical screening results either with or without the leaflet. Following the intervention, psychological distress appeared to be higher in the control group than in the intervention group among those who received a hypothetical positive screening result. The authors therefore concluded that information provision might help reduce psychological distress, and they recommend that cervical cancer screening programs provide participants with all relevant information.

    The post Highlights of the BMC Series – February 2021 appeared first on BMC Series blog.

    in BMC Series blog on March 09, 2021 11:59 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Monsanto Papers: a wrong book on glyphosate

    Disappointed by Carey Gillam's book about Lawyers' Adventures, I blog about the Monsanto and glyphosate affair.

    in For Better Science on March 09, 2021 06:39 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Towards equal opportunity for prostate, breast and ovarian cancers

    "Are they just taking the piss?" -Smut Clyde

    in For Better Science on March 08, 2021 06:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Journal pilot appends corrections when articles are downloaded

    New program makes sure researchers know when the articles they cite have errata or updates

    in Elsevier Connect on March 05, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why future “consciousness detectors” should look for brain complexity

    Imagine you are the victim of an unfortunate accident. A car crash, perhaps, or a fall down the stairs. After some time, you regain awareness. Sounds return, smells greet your nostrils, textures kiss your skin. Unable to move or speak, you lie helpless in your hospital bed. How would anyone know that you—your thoughts, feelings, and experiences—are still there?

    Doctors might look at your brainwave activity, as measured by an EEG device, to see if anyone’s home. What frequencies of activity does your brain produce? One brain rhythm your doctor might look for is called “delta.” This slow activity, one to four cycles per second, generally tells us that the cerebral cortex is offline and that the mind is plunged into unconsciousness. It is seen in dreamless sleep and under anesthesia.

    Delta activity might suggest to your doctor that no one is home, that you have no more awareness or feeling than someone in a dreamless sleep. But this assumption may be wrong. My recent research with Angelman syndrome, a rare disorder caused by dysfunction of the gene UBE3A, shows that children with this disorder who are unambiguously awake and conscious still show massive delta activity in their EEGs. These children have delayed development, intellectual disability, and often seizures. Some of them cannot communicate without gestures or special devices. But they can clearly feed themselves, play with other children, respond to environmental cues, and follow simple commands from their parents. In other words, there’s no question that they are conscious. So why are their brains generating delta activity, a signature of unconsciousness, while they are wide awake?

    The short answer is, nobody knows. But studying these children may be key to developing better biomarkers of consciousness that can be used to determine whether unresponsive brain-injured patients are covertly conscious—that is, conscious without the accompanying behaviors that usually tell us that someone is conscious.

    Besides delta activity, another EEG feature that relates to consciousness is the complexity of the EEG signal. This can be measured in various ways, such as looking for recurring patterns in the EEG signal or measuring how difficult the signal is to compress (on your computer, think about zipping an image of a Vermeer painting—complex—versus zipping an image of a Rothko painting—simple). We already know EEG complexity is diminished during dreamless sleep and anesthesia. Interestingly, it is also increased by psychedelic drugs that induce hallucinations. But in Angelman syndrome, where the rules of delta seem not to apply, will EEG complexity still track consciousness?
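
    To make the compression intuition concrete, here is a toy sketch (my illustration, not the study's pipeline): binarize a signal around its median and use its zlib-compressed size, normalized by the compressed size of a shuffled copy, as a crude complexity index. Published Lempel-Ziv-style measures are similar in spirit.

    import zlib
    import numpy as np

    def compression_complexity(signal, seed=0):
        """Crude complexity index: compressed size of the median-binarized
        signal divided by the compressed size of a shuffled (maximally
        irregular) copy. Near 1 = complex; near 0 = highly regular."""
        bits = (signal > np.median(signal)).astype(np.uint8)
        shuffled = np.random.default_rng(seed).permutation(bits)
        raw = zlib.compress(bits.tobytes())
        ref = zlib.compress(shuffled.tobytes())
        return len(raw) / len(ref)

    t = np.linspace(0, 10, 5000)
    regular = np.sin(2 * np.pi * 2 * t)                      # delta-like oscillation
    irregular = np.random.default_rng(1).normal(size=5000)   # richer signal
    print(compression_complexity(regular))    # low: repetitive, compresses well
    print(compression_complexity(irregular))  # near 1: hard to compress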

    My colleagues and I at the University of California, Los Angeles (UCLA) and collaborating institutions set out to test this hypothesis using EEGs recorded from 35 children with Angelman syndrome during both wakefulness and sleep. First, we asked whether the power of delta activity would change when the children fell asleep and, second, whether the EEG complexity would change when the children fell asleep.

    As expected, delta activity appeared very strong both when the children with Angelman syndrome were awake and when they were asleep. However, a thin slice of the delta spectrum was modulated by sleep. Specifically, delta waves between one and two cycles per second were enhanced when the children napped, while delta waves between two and four cycles per second did not show much modulation, as this portion of the delta spectrum appears mostly to reflect abnormalities in Angelman syndrome.
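
    Band-limited delta power of this kind is conventionally estimated from the EEG power spectrum. A generic sketch using Welch's method (not the study's code; the sampling rate and band edges below are assumptions):

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, lo, hi):
        """Average spectral power of one EEG channel in the [lo, hi] Hz band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # 4 s windows: fine low-frequency resolution
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()

    fs = 250                                             # assumed sampling rate (Hz)
    eeg = np.random.default_rng(0).normal(size=60 * fs)  # stand-in for 1 min of EEG
    slow_delta = band_power(eeg, fs, 1, 2)   # band reported as sleep-modulated
    fast_delta = band_power(eeg, fs, 2, 4)   # band reflecting the disorder itself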

    But what about the EEG complexity? As it turns out, this biomarker of consciousness passed the Angelman syndrome test with flying colors. Both measures of EEG complexity, one based on recurring patterns and one based on compression, were greatly diminished when the children napped. While EEG activity is rich and varied when children with Angelman syndrome are awake, it becomes repetitive and predictable when they are asleep.

    The results of this experiment are highly encouraging: we can use EEG signal complexity to track consciousness, even under circumstances where brain activity is highly abnormal. Of course, children with Angelman syndrome look very different from the patient who you imagined yourself as at the beginning of this post. But the bottom line is that we may never find universal biomarkers of consciousness by only studying healthy brains. Looking at many different brains—those that are healthy and those that are affected by various diseases—and searching for universal patterns is a better strategy, one that is more likely to translate to severely brain-injured patients in the intensive care units of hospitals.

    Given the overwhelming evidence relating EEG signal complexity to consciousness, I believe this metric will likely be used in clinics in the near future to measure patients’ level of consciousness. However, some debate still exists regarding the best context for taking these measurements. Is it better to look at the EEG complexity in response to magnetic brain stimulation, thus revealing the brain’s capacity to respond to a gentle “push”? Or is it better to look at the spontaneous EEG complexity, when the brain is minding its own business, so to speak? In our study, we took the latter approach, which is certainly easier in young children with neurodevelopmental disorders. However, the former approach, which uses a coil of wire held above the head to stimulate the brain noninvasively, has accumulated very strong evidence in the past decade as a nearly perfect measure of conscious state.

    The above technique, called the perturbational complexity index or PCI, has never been studied in Angelman syndrome. We do not know yet whether the abnormal brain dynamics seen in children with Angelman syndrome would trick PCI into giving a false negative result (that the children are unconscious when they are in fact awake and conscious), or if PCI would ace the test. But there is only one way to find out. The next step in this research will be to test PCI in children with Angelman syndrome. If it succeeds, then we are one step closer to building a “consciousness detector” for clinical use. If not, then back to the drawing board.

    The post Why future “consciousness detectors” should look for brain complexity appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on March 04, 2021 02:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: James Knight, Thomas Nowotny: GeNN

    The GeNN simulator

    James Knight and Thomas Nowotny will introduce the GeNN simulation environment and discuss its development in this dev session.

    The abstract for the talk is below:

    Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this dev session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework [1], which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. The Python interface has enabled us to develop a PyNN [2] frontend and we are also working on a Keras-inspired frontend for spike-based machine learning [3].

    In the session we will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it works inside. We will then talk in more depth about its development with a focus on testing for GPU dependent software and some of the further developments such as Brian2GeNN [4].
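
    For readers who have not used GeNN, its Python interface (PyGeNN) follows a build-then-run pattern. The snippet below is adapted from the style of the GeNN 4.x tutorials; the parameter names follow GeNN's built-in "LIF" model and may differ between versions:

    from pygenn.genn_model import GeNNModel

    model = GeNNModel("float", "lif_demo")   # single-precision model
    model.dT = 0.1                           # simulation timestep (ms)

    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
                  "Vthresh": -52.0, "Ioffset": 1.0, "TauRefrac": 2.0}
    lif_init = {"V": -65.0, "RefracTime": 0.0}
    pop = model.add_neuron_population("pop", 100, "LIF", lif_params, lif_init)

    model.build()   # generate and compile the GPU (or CPU) code
    model.load()
    while model.t < 200.0:   # simulate 200 ms
        model.step_time()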

    in INCF/OCNS Software Working Group on March 04, 2021 12:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Spatiotemporal Dynamics of the Brain at Rest Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks

    This week on Journal Club session David Haydock will talk about a paper "Spatiotemporal Dynamics of the Brain at Rest Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks".


    Neuroimaging research suggests that resting cerebral physiology is characterized by complex patterns of neuronal activity in widely distributed functional networks. As studied using functional magnetic resonance imaging (fMRI) of the blood-oxygenation-level-dependent (BOLD) signal, resting brain activity is associated with slowly fluctuating hemodynamic signals (on the order of 10 s). More recently, multimodal functional imaging studies involving simultaneous acquisition of BOLD-fMRI and electroencephalography (EEG) data have suggested that the relatively slow hemodynamic fluctuations of some resting state networks (RSNs) evinced in the BOLD data are related to much faster (on the order of 100 ms) transient brain states reflected in EEG signals, which are referred to as "microstates".

    To further elucidate the relationship between microstates and RSNs, we developed a fully data-driven approach that combines information from simultaneously recorded, high-density EEG and BOLD-fMRI data. Using independent component analysis (ICA) of the combined EEG and fMRI data, we identified thirteen microstates and ten RSNs that are organized independently in their temporal and spatial characteristics, respectively. We hypothesized that the intrinsic brain networks that are active at rest would be reflected in both the EEG data and the fMRI data. To test this hypothesis, the rapid fluctuations associated with each microstate were correlated with the BOLD-fMRI signal associated with each RSN.

    We found that each RSN was characterized further by a specific electrophysiological signature involving from one to a combination of several microstates. Moreover, by comparing the time course of EEG microstates to that of the whole-brain BOLD signal, on a multi-subject group level, we unraveled for the first time a set of microstate-associated networks that correspond to a range of previously described RSNs, including visual, sensorimotor, auditory, attention, frontal, visceromotor and default mode networks. These results extend our understanding of the electrophysiological signature of BOLD RSNs and demonstrate the intrinsic connection between fast neuronal activity and slow hemodynamic fluctuations.
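
    As a schematic of the analysis logic (not the study's actual pipeline), each fast microstate time course can be convolved with a hemodynamic response function and correlated with each RSN's BOLD time course. All names and parameters below are hypothetical:

    import numpy as np
    from scipy.stats import gamma

    def hrf(t):
        """Crude double-gamma-style hemodynamic response function (assumption)."""
        return gamma.pdf(t, 6.0) - 0.35 * gamma.pdf(t, 16.0)

    fs = 10.0                                   # shared sampling grid (Hz), assumed
    kernel = hrf(np.arange(0, 30, 1 / fs))

    rng = np.random.default_rng(0)
    microstates = rng.normal(size=(13, 3000))   # 13 microstate activation series
    rsn_bold = rng.normal(size=(10, 3000))      # 10 RSN time courses, same grid

    # Correlate each HRF-convolved microstate with each RSN time course.
    corr = np.empty((13, 10))
    for i, ms in enumerate(microstates):
        pred = np.convolve(ms, kernel)[: ms.size]    # hemodynamic prediction
        for j, bold in enumerate(rsn_bold):
            corr[i, j] = np.corrcoef(pred, bold)[0, 1]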


    Papers:

    Date: 2021/03/05
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 03, 2021 06:05 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Societies' Update

    Information for reviewers about relevant Elsevier and industry developments, support and training.

    in Elsevier Connect on March 03, 2021 11:37 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Coronil and other peer-reviewed Ayurvedic scams

    "ऑथर ने किसी हित संघर्ष की घोषणा नहीं की है"

    in For Better Science on March 03, 2021 09:13 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Ghanaian chemist is finding toxic substances in unusual places

    Through her research and science diplomacy, an environmental chemist is changing the narrative in her native Ghana

    in Elsevier Connect on March 03, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Erdogan’s academic elites

    Önder Metin had a rogue PhD student whom he trusted "to ensure their academic growth". But "mistakes were made by mistake", and conclusions are never affected. Yet those who still complain will pay dearly.

    in For Better Science on March 02, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Exploring the growing impact of Chinese research via open access

    As KeAi launches its 100th international OA journal, experts behind the Beijing-based CSPM-Elsevier partnership explain how quality and visibility are driving change

    in Elsevier Connect on March 02, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Announcing the PLOS Scientific Advisory Council

    Author: Veronique Kiermer, Chief Scientific Officer

    We are delighted to announce the creation of the PLOS Scientific Advisory Council, a small group of active researchers with diverse perspectives to help us shape our efforts to promote Open Science, globally.

    PLOS, as a non-profit organization committed to empowering researchers to change research communication, cannot successfully pursue its mission without listening to the research community. We regularly survey and consult the research communities we work with, formally and informally, to guide our choices and developments. The organization’s governance, including our Board of Directors, has always involved active researchers. And we derive great insight and advice from our continuous exchange with the academic Editors of PLOS journals. 

    We’ve decided to take an additional formal step and create a forum where the researchers who contribute to PLOS through different channels can interact directly with each other, and to ensure that this forum includes voices from researchers around the globe. 

    We’ve created the PLOS Scientific Advisory Council, a small group of researchers who represent varied scientific and career perspectives, who will advise PLOS executive and editorial leadership on strategic questions of scientific interest. 

    At this point, the Scientific Advisory Council is deliberately small (about a dozen individuals) to ensure in-depth discussions and engagement, but we’ve strived to include different disciplinary interests, career stages, and geographic representation. The group includes four PLOS Board members who are themselves active researchers and two of our journals’ academic Editors-in-Chief, alongside researchers who are not formally associated with PLOS.

    We are delighted to welcome the following members to the PLOS Scientific Advisory Council. To see their photos and full bios, please visit their page on our website: 

    Sue Biggins
    Fred Hutchinson Cancer Research Center and University of Washington, Seattle, USA

    Yung En Chee
    University of Melbourne, Australia

    Gregory Copenhaver
    University of North Carolina, Chapel Hill, USA

    Abdoulaye A. Djimde
    University of Bamako, Mali

    Robin Lovell-Badge
    The Francis Crick Institute, London, UK

    Direk Limmathurotsakul
    Mahidol University, Bangkok, Thailand

    Meredith Niles
    University of Vermont, Burlington, USA

    Jason Papin
    University of Virginia, Charlottesville, USA

    Simine Vazire (Chair)
    University of Melbourne, Australia

    Keith Yamamoto
    University of California, San Francisco, USA

    Veronique Kiermer (Secretary, ex officio)
    PLOS

    The post Announcing the PLOS Scientific Advisory Council appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 01, 2021 08:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 1 March 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 March at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public, and everyone is welcome to attend.

    You can use this link to convert the meeting time to your local time, or run this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @major.

    We hope to see you there!

    in NeuroFedora blog on March 01, 2021 09:48 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Alcohol Policies, Firearm Policies, and Suicide in the United States

    Background

    Alcohol and firearms are a dangerous combination and are commonly involved in suicide in the United States. This has only increased in importance as a public health issue, as alcohol drinking, firearm sales, and suicides in the United States have all increased since the start of the COVID-19 pandemic. State alcohol policies and state firearm policies might affect alcohol- and firearm-related suicide, but it is unknown how these policies specifically relate to such suicides, or how they might interact with one another.

    The study

    We conducted a cross-sectional study to assess relationships between alcohol policies, firearm policies, and U.S. suicides in 2015 involving alcohol, firearms, or both. We used the Alcohol Policy Scale, previously created and validated by our team, to assess alcohol policies and the Gun Law Scorecard from Giffords Law Center to quantify firearm policies. Suicide data came from the National Violent Death Reporting System. State- and individual-level GEE Poisson and logistic regression models assessed relationships between policies and firearm- and/or alcohol-involved suicides with a 1-year lag.
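
    For readers unfamiliar with the setup, here is a generic sketch of a state-clustered Poisson GEE in Python's statsmodels (illustrative only; the variable names and simulated data are assumptions, not the study's code):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical state-level data: suicide counts, policy scores, population.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "state": np.repeat(np.arange(50), 4),
        "suicides": rng.poisson(40, 200),
        "alcohol_score": rng.uniform(0, 100, 200),
        "gun_score": rng.uniform(0, 100, 200),
        "log_pop": np.log(rng.uniform(5e5, 4e7, 200)),
    })

    # Poisson GEE with observations clustered by state; the log-population
    # offset gives the coefficients an incidence-rate interpretation.
    model = smf.gee("suicides ~ alcohol_score + gun_score", groups="state",
                    data=df, family=sm.families.Poisson(), offset=df["log_pop"])
    result = model.fit()
    print(np.exp(result.params))   # incidence rate ratios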

    Results

    Higher alcohol and gun law scores were associated with reduced incidence rates and odds of suicides involving either alcohol or firearms.

    In the United States in 2015, alcohol and/or firearms were involved in 63.9% of suicides. Higher alcohol and gun law scores were associated with reduced incidence rates and odds of suicides involving either alcohol or firearms. For example, a 10% increase in alcohol policy score was associated with a 28% reduction in the rate of suicides involving alcohol or firearms. Similarly, a 10% increase in gun policy score was associated with a 14% decrease in the rate of suicides involving firearms.

    These relationships were similar for suicides that involved alcohol and firearms. For example, a 10% increase in alcohol policy score was associated with a 52% reduction in the rate of suicides involving alcohol and firearms. A 10% increase in gun policy score was associated with a 26% reduction in the rate of suicides involving alcohol and firearms.

    In addition, we found synergistic effects between alcohol and firearm policies, such that states with restrictive policies for both alcohol and firearms had the lowest odds of suicides involving alcohol and firearms.

    Conclusions and next steps

    Results of the study suggest that laws restricting firearms ownership among high-risk individuals, including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    We found restrictive alcohol and firearm policies to be associated with lower rates and odds of suicides involving alcohol, firearms, or both, and our research suggests that alcohol and firearm policies may be a promising means of reducing suicide. These protective relationships were particularly striking for suicides involving both alcohol and firearms, as was the strong protective interaction between alcohol and firearm policy variables, particularly for suicides involving alcohol. These findings, taken in the context of the broader literature, also suggest that laws restricting firearm ownership among high-risk individuals (so-called ‘may issue’ laws), including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    Because this was a cross-sectional analysis, this should be considered a hypothesis-generating study that cannot prove a causal association between alcohol or firearm policies and suicide. In future research, studies using multiple years of policy and suicide data would strengthen causal inference.

    Stronger alcohol and firearm policies are a promising means to prevent a leading and increasing cause of death in the U.S. The findings further suggest that strengthening both policy areas may have a synergistic impact on reducing suicides involving either alcohol, firearms, or both.

    The post Alcohol Policies, Firearm Policies, and Suicide in the United States appeared first on BMC Series blog.

    in BMC Series blog on March 01, 2021 07:11 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Overview of 'The Spike': an epic journey through failure, darkness, meaning, and spontaneity

    from Princeton University Press (March 9, 2021)


    THE SPIKE is a marvelously unique popular neuroscience book by Professor Mark Humphries, Chair of Computational Neuroscience at the University of Nottingham and Proprietor of The Spike blog on Medium. Humphries' novel approach to brain exposition is built around — well — the spike, the electrical signal neurons use to communicate. In this magical rendition, the 2.1 second journey through the brain takes 174 pages (plus Acknowledgments and Endnotes).

    I haven't read the entire book, so this is not a proper book review. But here's an overview of what I might expect. The Introduction is filled with inventive prose like, “We will wander through the splendor of the richly stocked prefrontal cortex and stand in terror before the wall of noise emanating from the basal ganglia.” (p. 10).


    Did You Know That Your Life Can Be Reduced To Spikes?

    Then there's the splendor and terror of a life reduced to spikes (p. 3):

    “All told, your lifespan is about thirty-four billion billion cortical spikes.”


    Spike Drama

    But will I grow weary of overly dramatic interpretations of spikes? “Our spike's arrival rips open bags of molecules stored at the end of the axon, forcing their contents to be dumped into the gap, and diffuse to the other side.” (p. 29-30).

    Waiting for me on the other side of burst vesicles are intriguing chapters on Failure (dead end spikes) and Dark Neurons, the numerous weirdos who remain silent while their neighbors are “screaming at the top of [their] lungs.” (p. 83). I anticipate this story like a good mystery novel with wry throwaway observations (p. 82):

    “Neuroimaging—functional MRI—shows us Technicolor images of the cortex, its regions lit up in a swirling riot of poorly chosen colors that make the Pantone people cry into their tasteful coffee mugs.”


    Pantone colors of 2021 are gray and yellow

    Wherever it ends up – with a mind-blowing new vision of the brain based on spontaneous spikes, or with just another opinion on predictive coding theory – I predict THE SPIKE will be an epic and entertaining journey.


    in The Neurocritic on February 28, 2021 09:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Invisibility of COVID-19

    Why is it so hard to picture COVID-19?

    in Discovery magazine - Neuroskeptic on February 28, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dialogue With Dreamers

    Researchers claim that they can ask dreaming participants questions and receive answers from them.

    in Discovery magazine - Neuroskeptic on February 27, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why arXiv needs a brand

    Pop quiz: There are so many “-Xivs” online and on social media. Which ones are operated by arXiv?

    Answer: only arXiv.org and @arXiv

    arXiv is a highly valued tool known primarily through its digital presence. However, the use of arXiv’s name by other services has led to confusion. And despite decades of reliable service, arXiv’s inconsistent visuals and voice have projected an air of neglect. This jeopardizes our ability to raise funds for critical improvements.

    “As the role of arXiv in open science becomes more evident, its value should be made obvious lest we end up losing the system we cherish and rely upon so much,” said Alberto Accomazzi, PhD, ADS Program Manager.

    In 2020, Accomazzi joined nine other diverse community members to become part of an advisory group formed to support arXiv’s communications and identity project. The goal? To ensure that the way we present ourselves to the world reflects arXiv’s true nature as a professional, innovative tool created by and for researchers.

    Throughout the identity project, we:

    • assessed user feedback collected since 2016,
    • surveyed board members and 7,000 additional users about their perceptions of arXiv,
    • gathered ten diverse community members to serve as advisors,
    • contracted with a professional designer to produce a logo, and
    • are working with an accessibility consultant to address the needs of all arXiv readers and authors.

    To guide our branding efforts we focused on arXiv as a place of connection, linking together people and ideas, and connecting them with the world of open science. After many rounds of revision and refinement, arXiv’s first brand guide was produced, in addition to our new logo and usage guidelines, and we’d like to share them with you now.

    arXiv's logo: the intertwining arms at the heart of the logo represent arXiv as a place of connection.

    The arXiv logo looks to the future and nods to the past with a font that pays homage to arXiv’s birth in the ’90s while also being forward-looking. The arms of the ‘X’ retain stylistic elements of the ‘chi’ in our name, with lengthened top-left and lower-right branches. Symbolically, the intertwining of the arms at the heart of the logo captures the spirit of arXiv’s core value: arXiv is a place of connection, linking together people and ideas, and connecting them with the world of open science.

    The brand guide and usage guidelines ensure that we express arXiv’s identity with consistent quality and continuity across communication channels. By strengthening our identity in this way, arXiv will be recognizable and distinct from other services. Staff will save time by having access to clear, consistent guidelines, visual assets, and style sheets, and collaborators will know the expectations regarding arXiv logo usage.

    The arXiv community will notice that the main arXiv.org site remains the same at this time. That’s because the identity rollout and implementation process will be gradual, starting with official documents before moving to core arXiv services.

    “arXiv must take control of its identity to maintain its place and grow within the scholarly communications ecosystem,” said arXiv’s executive director Eleonora Presani, PhD.

    in arXiv.org blog on February 26, 2021 03:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Know Your Brain: Red Nucleus

    The red nuclei are colored red in this cross-section of the midbrain.

    Where is the red nucleus?

    The red nucleus is found in a region of the brainstem called the midbrain. There are actually two red nuclei—one on each side of the brainstem.

    The red nucleus can be subdivided into two structures that are generally distinct in function: the parvocellular red nucleus, which mainly contains small and medium-sized neurons, and the magnocellular red nucleus, which contains larger neurons. The red nucleus is recognizable immediately after dissection because it maintains a reddish coloring. This coloring is thought to be due to iron pigments found in the cells of the nucleus.

    What is the red nucleus and what does it do?

    As mentioned above, the red nucleus can be subdivided into two structures with separate functions: the parvocellular red nucleus and the magnocellular red nucleus. In the human brain, most of the red nucleus is made up of the parvocellular red nucleus, or RNp; the magnocellular red nucleus (RNm) is not thought to play a major role in the adult human brain. In four-legged mammals (e.g., cats, mice), however, the RNm is a more prominent structure—in both size and importance.

    Neurons from the cerebellum project to the RNm, and RNm neurons leave the red nucleus and form the rubrospinal tract, which descends in the spinal cord. In animals that walk on four legs, this pathway is activated around the time of voluntary movements; it seems to play an important role in walking, avoiding obstacles, and making coordinated paw movements. RNm neurons, however, also respond to sensory stimulation, and may provide sensory feedback to the cerebellum to help guide movements and maintain postural stability.

    In primates that mainly walk on two legs (including humans), the RNm is not thought to play a large role in walking and maintaining postural stability, as other tracts (e.g., the corticospinal tract) take over such functions. The RNm, however, does appear to be involved in controlling hand movements in humans and other primates. Interestingly, the RNm is more prominent in the human fetus and newborn, but regresses as a child ages, which may have to do with the development of corticospinal tract and the ability to walk on two legs.

    Despite its relatively greater import in the human brain, the RNp is poorly understood, as its diminished presence in other animals makes it more difficult to study using an animal model. Neurons from motor areas in the prefrontal cortex and premotor cortex, as well as neurons from nuclei in the cerebellum known as the deep cerebellar nuclei, extend to the RNp. There is also a collection of neurons that leave the RNp and travel to the inferior olivary nucleus, which communicates with the cerebellum and is thought to be involved in the control of movement. A number of proposed functions have been attributed to these connections between the RNp, cerebellum, and inferior olivary nucleus, such as movement learning, the acquisition of reflexes, and the detection of errors in movements. But the precise function of these pathways—and thus the RNp’s role in them—is still not clear.

    Several studies have found the red nucleus to play a role in pain sensation as well as analgesia. The latter might be due to connections between the red nucleus and regions like the periaqueductal gray and raphe nuclei, which are part of a natural pain-inhibiting system in the brain.

    In terms of pathology, dysfunction in the human red nucleus has been linked to the development of tremors, and is being investigated as playing a potential role in Parkinson’s disease. Damage to the red nucleus has also been associated with a number of other problems with movement and muscle tone.

    References (in addition to linked text above):

    Basile GA, Quartu M, Bertino S, Serra MP, Boi M, Bramanti A, Anastasi GP, Milardi D, Cacciola A. Red nucleus structure and function: from anatomy to clinical neurosciences. Brain Struct Funct. 2021 Jan;226(1):69-91. doi: 10.1007/s00429-020-02171-x. Epub 2020 Nov 12. PMID: 33180142; PMCID: PMC7817566.

    Vadhan J, M Das J. Neuroanatomy, Red Nucleus. 2020 Jul 31. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2020 Jan–. PMID: 31869092.

    in Neuroscientifically Challenged on February 26, 2021 11:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Preventing and controlling water pipe smoking

    Water pipe smoking as a public health crisis

    Water Pipe Smoking (WPS) accounts for a significant and growing share of tobacco use globally. WPS is a culture-based method of tobacco use that has undergone a worldwide re-emergence since 1990 and is regaining popularity among many population groups, especially school and university students. WPS is also prevalent among highly educated groups. Although it is most prevalent in Asia, particularly the Middle East, and in Africa, it has become a rapidly emerging problem on other continents, including Europe, North America, and South America.

    WP business has remained largely unregulated and uncontrolled, which may result in the increasing prevalence of WPS.

    WPS has been shown to be potentially more addictive than cigarette smoking. It has a huge negative impact on population health, health costs, and countries' gross domestic product. The WP business has remained largely unregulated and uncontrolled, which may contribute to the increasing prevalence of WPS. Using deceptive advertising, many cafes and restaurants offer WP services alongside their conventional services in order to earn more profit and lure more customers. Factors that attract children and adolescents to WP cafes include the provision of flavored or psychotropic WP tobacco products, the proximity of WP cafes to public settings such as educational or residential settings and sports clubs, tempting decoration, the provision of study places for students, live music, a variety of games and gambling, and the possibility of watching films and live sports matches.

    The importance of our study

    Despite concerns about the outcomes of WPS and nearly three decades of control measures, the prevalence of WPS has increased worldwide. Owing to the unique, multi-component nature of the WP, little is known about preventing and controlling WPS, and special interventions may be required. Accordingly, our study published in BMC Public Health aimed to identify management interventions at international and national levels for preventing and controlling water pipe smoking.

    Our study

    We conducted a systematic literature review. Studies evaluating at least one intervention for preventing and controlling WPS were included, and two independent investigators performed the quality assessment and data extraction for the eligible studies.

    After removing duplicates, 2228 of the 4343 retrieved records remained, and 38 studies were selected as the main corpus of the present study. The selected studies covered 19 countries: the United States (13.15%), the United Kingdom (7.89%), Germany (5.26%), Iran (5.26%), Egypt (5.26%), Malaysia (2.63%), India (2.63%), Denmark (2.63%), Pakistan (2.63%), Qatar (2.63%), Jordan (2.63%), Lebanon (2.63%), Syria (2.63%), Turkey (2.63%), Bahrain (2.63%), Israel (2.63%), the United Arab Emirates (2.63%), Saudi Arabia (2.63%), and Switzerland (2.63%). Study designs included cross-sectional (31.57%), quasi-experimental (15.78%), and qualitative (23.68%) types.

    Interventions identified through the content analysis were discussed and classified into relevant categories. We identified 27 interventions, grouped into four main categories: preventive interventions (5; 18.51%), control interventions (8; 29.62%), and the enactment and implementation of legislation and policies for controlling WPS at national (7; 25.92%) and international (7; 25.92%) levels. The interventions are shown in the following table.

    Table: Effective Interventions in Preventing and Controlling Water Pipe Smoking

    Study implications

    The current enforced legislations are old, unclear, incompatible with the needs of the adolescents and are not backed by rigorous evidence.

    In general, our findings indicate that the social and health crisis related to WPS has not received attention at high levels of policy making. The legislation currently enforced is old, unclear, incompatible with the needs of adolescents, and not backed by rigorous evidence. In addition, the WP industry is expanding rapidly without monitoring and control measures. Informing and empowering adolescents who have not yet experienced smoking is a sensible intervention in this regard. Moreover, empowering and involving health students and professionals in WPS control programs can yield promising results. There remains a paucity of evidence on strategies for controlling and preventing WPS, so further research is warranted.

    The post Preventing and controlling water pipe smoking appeared first on BMC Series blog.

    in BMC Series blog on February 26, 2021 07:33 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The guardians of Scopus

    Here’s how independent subject experts monitor the titles in Scopus to uncover predatory journals

    in Elsevier Connect on February 26, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Interactions between Brassica Napus and Extracellular Fungal Pathogen Pyrenopeziza Brassicae

    This week on Journal Club session Chinthani Karandeni Dewage will talk about a paper "Interactions between Brassica Napus and Extracellular Fungal Pathogen Pyrenopeziza Brassicae".


    Light leaf spot (Pyrenopeziza brassicae) is currently the most damaging foliar disease on winter oilseed rape (Brassica napus) in the UK. Deployment of cultivar resistance remains an important aspect of effective management of the disease. Nevertheless, the genetic basis of resistance remains poorly understood and no B. napus resistance (R) genes against P. brassicae have been cloned. In this talk, I will be presenting the findings from my research on host resistance against P. brassicae and specific interactions in this pathosystem. New possibilities offered by genomic approaches for rapid identification of R genes and pathogenicity determinants will also be discussed.


    Papers:

    • Chinthani Karandeni Dewage et al., “Interactions between Brassica Napus and Extracellular Fungal Pathogen Pyrenopeziza Brassicae”, 2021, in preparation

    Date: 2021/02/25
    Time: 14:00
    Location: online

    in UH Biocomputation group on February 25, 2021 10:29 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Using mathematics to solve practical problems? It’s elementary.

    Math sleuth Khongorzul Dorjgotov honed her problem-solving skills on Sherlock Holmes

    in Elsevier Connect on February 25, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How collaboration changes the world for people with rare diseases

    In recognition of #RareDiseaseDay, Elsevier is making select research articles and book chapters freely available for two months

    in Elsevier Connect on February 25, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Darwin’s theory of agency: back to the future in evolutionary science?

    Was Darwin a one-trick pony? The scientists who most praise him typically cite just one of his ideas: natural selection. Do any know that his theory of evolution—like his take on psychology—presumed all creatures were agents? This fact has long been eclipsed by the “gene’s-eye view” of adaptation which gained a strangle-hold over biology during the twentieth century—and hence over sociobiology and today’s evolutionary psychology. Are current efforts to revise this view—emphasising “new” topics like the flexibility of phenotypes (an organism’s living characteristics) and the importance of development in adaptation—simply rediscovering Darwin’s approach?

    How do members of a species come to differ from each other, thus furnishing the raw material for the struggle whose results Darwin subsumed under what he called the law of natural selection? In two ways, wrote Darwin in 1859. A creature’s parents transmit different characteristics to them as starting-points for their lives. And, over their life-course, creatures develop their starting-characteristics in different ways, depending on how they respond to the various challenges they meet. In 1942, Julian Huxley and his colleagues recast natural selection as a mechanism, not a law. Imagine twinned roulette-wheels: each individual’s evolutionary fate resulted from a random genetic provision of phenotypic traits being pitted against a lottery of environmental events. This new language of genes, genomes, and (from 1953) DNA, made Darwin look ignorant about how individual differences arose. Dying before genes or DNA were discovered, he never knew that mutations and chromosome-changes caused all variations (according to Huxley).

    To retain Darwin as the figurehead of this gene-first take on nature, he was retro-fitted with twentieth-century beliefs. If DNA “programmes” what creatures are and do, genetic processes drive evolution forward, not creaturely acts. Hence, Huxley asserted, the “great merit” of Darwin was his proof that living organisms never acted purposively: everything they did could be accounted for “on good mechanistic principles.” Likewise, even when Harvard biologist Richard Lewontin spoke out against the way the cult of genes blinds us to the active role organisms play in adaptation, his first target was Darwin, who, Lewontin said, portrayed organisms “as passive objects moulded by the external force of natural selection.”

    Lewontin’s vouch that organisms help shape their own fates is now gaining traction. Few genes behave as they should according to Mendel. So, talk about genes “for” a phenotypic character is rarely appropriate. Even when we know everything we can about genes and environment, we still cannot predict what characteristics will emerge in an organism—proving phenotypes are independent sources of plasticity in the genesis of adaptations. Organisms help cause their own development and destiny, which means phenotypes themselves have evolutionary effects. Never mind why a fly hatches from its pupa to be small: its unusually big surface to volume ratio cannot help but shape its remaining life-history. This point underlines findings from biologists who study animal behaviour. Beavers build dams, chimps and crows make tools, wolves hunt better in packs. All such feats alter the evolutionary prospects of phenotypes.

    Darwin’s books herald all these emphases: the distinction between transmission and development when discussing inheritance; the “plasticity of organisation” in all creatures; and, importantly here, the tie between action and structure. Darwin saw nature as a theatre of agency. The roots of cabbage seedlings successfully improvised, after Darwin experimentally blocked them from plunging straight down into the earth. Earthworms “intelligently” grasped how best to tug his artificially-shaped “leaves” to plug their holes against the cold. And when newly-arrived finches were competing for food on the Galapagos Islands, it must have been the birds who first found the best new diets—not random genetic changes—who gained reproductive supremacy and consequently, over millennia, such new bodily adaptations as the skin-piercing beak of blood-sucking Vampire Finches.

    Actions produce reactions, The Origin of Species repeatedly reminds us. Which means an organism’s actions inevitably render it interdependent with its habitat, animate and inanimate. Such ties may be competitive or cooperative—“mutual aid” being the hallmark of evolution in “social animals” like us. Hence, when Darwin published his views on human agency in The Descent of Man—first published 150 years ago this month—and its sequel, The Expression of the Emotions (1872), social interdependency took pride of place. Darwin argued non-verbal expressions to be purposeless by-products of functional habits—we weep because we protectively close our eyes when screaming, incidentally squeezing our tear-glands. Such unintended side-effects only come to signify emotion because others “recognize” their meaning as linked to suffering. When I blush, my inbuilt capacity for reading expressions has rebounded, leading me to read in you how I imagine you to be reading me. Such “self-attention” underpins sexual display, plus such quintessentially human traits as language, culture, and conscience.

    Go back to what Darwin wrote about evolution, and you will hear him speaking from a place that the latest biology now renders prescient. Interdependencies of agency not only forge individual differences, and winnow the kernels of inter-generational success from the chaff of failure. They also compose Darwin’s unsung creation of the first naturalistic psychology.

    Featured image by Pat Josse

    The post Darwin’s theory of agency: back to the future in evolutionary science? appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on February 24, 2021 01:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS and Yale University Announce Publishing Agreement

    Yale University posted the following announcement on its website on February 23, 2021

    Yale University Library has signed two innovative agreements that will allow Yale-affiliated authors to publish in any PLOS open access journal without paying article processing charges (APCs).

    PLOS is a non-profit, open access publisher of seven highly respected scientific and medical journals. Last year Yale authors published more than 100 articles in PLOS journals, with APCs of up to $3,000 per article. Effective Jan. 1, 2021, these author-paid APCs will be eliminated and replaced with annual fees paid by the library. The authors will maintain copyright ownership of their research.

    “Our goal is to make open access publishing a more viable option for more Yale researchers in science and medicine, and to support a publication model that will also encourage open access publishing beyond Yale,” said Barbara Rockenbach, the Stephen Gates ’68 University Librarian.

    Open access publishing has grown in popularity since the 1990s when peer-reviewed journals began publishing online with a traditional business model based on limited access and high subscription fees. Open access developed as an alternative to make new research quickly and widely available with financial support from those producing the research. However, financial support for APCs from academic departments, government, and other research funders has varied widely, with some authors having to pay from personal funds.

    The library agreements will eliminate APCs for Yale authors publishing in PLOS Biology, PLOS Medicine, PLOS One, PLOS Computational Biology, PLOS Pathogens, PLOS Genetics, and PLOS Neglected Tropical Diseases, as well as in any new PLOS publications launched during the contract term. The initial agreements are for three years and will be funded through Yale Library’s Collection Development department with support from the Cushing/Whitney Medical Library.

    “We are pleased that Yale Library can support this emerging, more sustainable model of open-access publishing,” Rockenbach said. “We are committed to facilitating equitable access to research in science and medicine–and the progress research fuels.”

    The post PLOS and Yale University Announce Publishing Agreement appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 23, 2021 04:10 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Temporal dispersion of spike rates from deconvolved calcium imaging data

    On Twitter, Richie Hakim asked whether the toolbox Cascade for spike inference (preprint, Github) induces temporal dispersion of the predicted spiking activity compared to ground truth. This kind of temporal dispersion had been observed in a study from last year (Wei et al., PLoS Comp Biol, 2020; also discussed in a previous blog post), which raised the concern that analyses based on raw or deconvolved calcium imaging data might falsely suggest continuous sequences of neuronal activations when the true activity patterns in fact come in discrete bouts.

    To approach this question, I used one of our 27 ground truth datasets (the one recorded for the original GCaMP6f paper). From all recordings in this dataset, I detected events that exceeded a certain ground truth spike rate. Next, I assigned these extracted events to three groups and systematically shifted the detected events of groups 1 and 3 back and forth by 0.5 seconds. Note that this is a short shift compared to the timescale investigated by the Wei et al. paper. This is what the ground truth looks like; it is clearly not a continuous sequence of activations:

    To evaluate whether the three-bout pattern would turn into a continuous sequence after spike inference, I took the dF/F recordings associated with the above ground truth recordings and inferred spike rates using Cascade's global model for excitatory neurons (a pretrained network that comes with the toolbox). There is indeed some dispersion, owing to the difficulty of inferring spike rates from noisy data, but the three bouts are very clearly visible.

    This is even more apparent when plotting the average spike rate across neurons:

    Therefore, it can be concluded that there are conditions and existing datasets where discrete activity bouts can be clearly distinguished from sequential activations based on spike rates inferred with Cascade.

    This analysis was performed on neurons at a standardized noise level of 2% Hz⁻¹ (see the preprint for a proper definition of the standardized noise level). This is a typical and very decent noise level for population calcium imaging. However, if we perform the same analysis on the same dataset but with a relatively high noise level of 8% Hz⁻¹, the resulting predictions are indeed much more dispersed, since the dF/F patterns are too noisy to allow more precise predictions. The average spike rate still shows three peaks, but they are riding on top of a more broadly distributed, seemingly persistent increase of the spike rate.

    If you want to play around with this analysis with different noise levels or different data sets, you do not need to install anything. You can just, within less than 5 minutes, run this Colaboratory Notebook in your browser and reproduce the above results.
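
    For readers who would like to see the skeleton of this analysis in code before opening the notebook, here is a minimal sketch of the event-shifting step and the inference call. It is my own illustration, not an excerpt from the notebook: the imaging rate, the synthetic dF/F array, and the pretrained model name are assumptions, and it relies on the cascade2p package's predict entry point as shown in the toolbox demos.

    import numpy as np
    from cascade2p import cascade

    fs = 30.0                  # imaging rate in Hz (assumption)
    shift = int(0.5 * fs)      # 0.5 s shift expressed in samples

    # Placeholder for real dF/F traces, shape (n_neurons, n_timepoints);
    # in the actual analysis these come from the GCaMP6f ground truth data.
    dff = (np.random.randn(90, 3000) * 0.05).astype(np.float32)

    # Split neurons into three groups and shift groups 1 and 3 back/forth
    # by 0.5 s, so the underlying activity forms three discrete bouts.
    groups = np.array_split(np.arange(dff.shape[0]), 3)
    dff[groups[0]] = np.roll(dff[groups[0]], -shift, axis=1)
    dff[groups[2]] = np.roll(dff[groups[2]], +shift, axis=1)

    # Infer spike rates with a pretrained global model for excitatory
    # neurons (the model name is an assumption; the model must be
    # downloaded first, e.g. with cascade.download_model).
    rates = cascade.predict('Global_EXC_30Hz_smoothing50ms', dff)

    # Population-average inferred rate: with low-noise data, the three
    # bouts should remain visible as three separate peaks.
    mean_rate = np.nanmean(rates, axis=0)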

    in Peter Rupprecht on February 23, 2021 01:08 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    With a successful medical career, this researcher pursues his dream job

    Dr Seng Cheong Loke is designing an augmented reality app to help older people communicate with their families

    in Elsevier Connect on February 23, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Epidemic dynamics and control: a few simple concepts

    In this text, I try to explain a few simple concepts concerning the dynamics and control of an epidemic. I am writing it, of course, with the Covid-19 epidemic in mind, but most of the concepts are general. As a preamble, I should make clear that I am neither a physician nor an epidemiologist, so I will say practically nothing about the purely medical aspects, nor about the subtleties of epidemiology, but will simply cover a few general notions. My specialty is the modelling of dynamical phenomena in biology, specifically in neuroscience. I therefore thank knowledgeable colleagues in advance for any clarifications, corrections, or relevant references.

    A few preliminary remarks on statistics

    Before starting the explanations, I would first like to invite the reader to be cautious when interpreting statistics, particularly mortality statistics. As I write these lines, an estimated 15% of the French population has been infected. In other words, the epidemic is in its early stages. Mortality statistics are therefore not a "final tally" of the epidemic, but statistics on an ongoing epidemic. Comparing them with the tolls of past epidemics, or with other causes of mortality, therefore makes little sense (for a rough order of magnitude, one could multiply these statistics by 5).

    Second, the mortality of a disease does not depend only on the virus. It also depends on the person who is ill. A major factor is age, which must therefore be taken into account when comparing countries with very different demographics. To a first approximation, the risk of dying from Covid-19 increases with age in the same way as the risk of dying from other causes. One can view this as meaning that the young are at low risk, or that every age group sees its risk of dying within the year increase by the same factor. Either way, with this type of mortality profile, the mean or median age at death is not very informative, since it is the same with and without infection.

    Third, the mortality of a disease also depends on medical care. Covid-19 in particular involves high rates of hospitalization and intensive care. The mortality observed in France so far is that of a healthcare system that is not saturated. It would naturally be much higher if that care could not be provided, that is, if the epidemic were not controlled, and mortality would also shift towards a younger population.

    Finally, it goes without saying that the severity of a disease is not reducible to its mortality. A hospitalization is generally not trivial, and less severe cases can have long-term sequelae.

    The reproduction number

    A virus is an entity that can replicate inside a host and be transmitted to other hosts. Unlike a bacterium, which is a cell, a virus is not strictly speaking an organism: it depends entirely on its host for its survival and reproduction. Consequently, to understand the dynamics of a viral epidemic, one must look at the number of infected hosts and at transmission between hosts.

    An important parameter is the reproduction number (R). This is the average number of people that one infected person will go on to infect. The epidemic grows if R>1 and dies out if R<1. At each transmission, the number of cases is multiplied by R. The reproduction number does not say how fast the epidemic develops, because that depends on the incubation time and the period of contagiousness. It is in fact a parameter that is mainly useful for understanding how to control the epidemic. For example, if R = 2, then the epidemic can be controlled by halving one's number of contacts.

    Since the number of cases is multiplied by some factor at each round of contamination, an epidemic typically has exponential dynamics: it is the number of digits that grows steadily. It takes as long to go from 10 to 100 cases as from 100 to 1,000 cases, or from 1,000 to 10,000 cases. The dynamics are therefore explosive in nature. This is why the quantity to watch closely is not so much the number of cases as the reproduction number: as soon as R>1, the number of cases can explode rapidly and one must act fast.

    Naturally, this reasoning assumes that the population has not already been infected. If a proportion p of the population is immune (previously infected or vaccinated), then each infected person will infect on average R × (1-p) people. The epidemic therefore stops when this number falls below 1, that is, when p > 1 - 1/R. For example, with R = 3, the epidemic stops when two thirds of the population are immune.
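
    To make these two relations concrete, here is a minimal Python sketch (my own illustration, not from the original post) that iterates the case count under a given R and computes the immunity threshold p > 1 - 1/R:

    def herd_immunity_threshold(R):
        # Immune fraction p above which R * (1 - p) < 1, i.e. p > 1 - 1/R.
        return 1.0 - 1.0 / R

    def case_trajectory(n0, R, generations):
        # Case count across transmission generations: n -> R * n each time.
        cases = [n0]
        for _ in range(generations):
            cases.append(cases[-1] * R)
        return cases

    print(herd_immunity_threshold(3.0))       # 0.667: two thirds must be immune
    print(case_trajectory(10, 2.0, 10)[-1])   # 10 doublings: 10 -> 10240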

    This also tells us how the ongoing vaccination campaign affects control of the epidemic. For example, as I write these lines (22 February 2021), about 2% of the population has been vaccinated (4% has received a first dose). This therefore reduces R by about 2% (for example from 1.1 to 1.08). It is thus clear that vaccination will not have a major effect on the overall dynamics for several months.

    It is important to understand that the reproduction number is not an intrinsic characteristic of the virus. It depends on the virus, but also on the host, who may be more or less contagious (there has been talk of "superspreaders", for example), on behavioural factors, and on protective measures (masks, for example). This number is therefore not necessarily homogeneous across a population. For instance, R is plausibly higher among young working people than among the elderly.

    Can part of the population be isolated?

    Is it possible to protect the most vulnerable part of the population by isolating it from the rest, without controlling the epidemic? This hypothesis has been put forward several times, although it has been heavily criticized in the scientific literature.

    It is fairly easy to see why this is a perilous idea. Isolating part of the population has an almost negligible impact on the reproduction number R, so the dynamics of the epidemic are unchanged. Keep in mind that controlling an epidemic so that it dies out simply requires making R<1, so that the number of cases decreases exponentially. During the strict lockdown of March 2020, for instance, the rate was about R = 0.7. That is enough for the epidemic to die out, but an infected person nonetheless continues to infect others. Consequently, unless these vulnerable people could be isolated far more strictly than during the first lockdown (which seems doubtful, given that some of them depend on others for care), the epidemic in this population will follow the epidemic in the general population, with the same dynamics in a somewhat attenuated version. In other words, it seems implausible that this strategy would be effective.

    Variants

    A virus can mutate: when it replicates inside a host, errors are introduced, so that the properties of the virus change. This can affect the symptoms, or the contagiousness. Naturally, the more infected hosts there are, the more variants there are, so this is a phenomenon that arises in uncontrolled epidemics.

    Suppose that R = 2 and that a variant has R = 4. Then at each transmission, the number of variant cases relative to the original virus is multiplied by 2. After 10 transmissions, the variant represents 99.9% of cases. This remains true if restrictive measures reduce transmission (for example to R = 2/3 and R = 4/3). After those 10 transmissions, the overall R is that of the variant. Consequently, it is the case count and the R of the more contagious variant that determine the case count and the dynamics over the medium term (that is, a few weeks). The case count of the original virus, and even the overall case count, are essentially meaningless.
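
    As a quick illustration of this takeover effect, here is a small Python sketch (my own, not from the original post; the starting case counts are arbitrary):

    def variant_share(n_orig, n_var, R_orig, R_var, generations):
        # Fraction of cases due to the variant after each generation.
        shares = []
        for _ in range(generations):
            n_orig *= R_orig
            n_var *= R_var
            shares.append(n_var / (n_orig + n_var))
        return shares

    # Variant starts at 1% of cases; measures give R = 2/3 vs R = 4/3.
    shares = variant_share(9900, 100, 2/3, 4/3, 10)
    print(["%.1f%%" % (100 * s) for s in shares])   # climbs towards 100%
    # Note that total cases fall at first even though the dynamics are
    # already explosive, exactly as described in the text.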

    This means that the dynamics can be explosive even while the number of cases is falling. To know whether the epidemic is under control, one must look at the R of the most contagious variant. As I write, we are precisely in the situation where the original virus is still dominant with R<1 while the variants have R>1, which means that despite an overall decline in cases, we are in an explosive dynamic that will become apparent in the overall case count once the variants are dominant.

    Epidemic control

    Controlling the epidemic means ensuring that R<1. In that situation, the number of cases decreases exponentially and approaches 0. The point is not necessarily to suppress all transmission, but to ensure, through a combination of measures, that R is smaller than 1. Going from R = 1.1 to R = 0.9 is thus enough to turn an explosive epidemic into one that dies out.

    Naturally, the surest measure for extinguishing the epidemic is to prevent all social contact ("lockdown"). But there are potentially many other measures, and ideally one combines several measures that are both effective and minimally constraining, keeping lockdown as a last resort for when those measures have failed. The difficulty is that the impact of a measure is not precisely known for a new virus.

    After a year of the Covid-19 epidemic, however, that knowledge is far from negligible. We know, for example, that mask wearing is very effective (as was already suspected, given that this is a respiratory infection). We know that the virus spreads through droplet projection and aerosols. We also know that schools and communal dining places are major sites of contamination. That observation may lead to closing such places, but one could alternatively make them safer by installing ventilation and filters (an investment that could, incidentally, be synergistic with an energy renovation plan).

    There are two broad types of measures. Global measures apply to healthy people and virus carriers alike: mask wearing, closing certain venues, instituting remote work. The cost of these measures (in the broad sense, that is, the economic cost and the constraints) is fixed. Targeted measures are triggered whenever a case occurs: contact tracing, closing a school, a local lockdown. These targeted measures have a cost proportional to the number of cases. The overall cost is therefore the combination of a fixed cost and a cost proportional to the number of cases. Consequently, it is always more expensive to keep an epidemic under control when the number of cases is larger (a choice that nevertheless seems to have been made in France after the second lockdown).

    The "plateau"

    One important remark: measures act on the progression of the epidemic (R), not directly on the number of cases. This means that if we know how to maintain a high case count (R=1), then we know just as well how to maintain a low case count (with the same measures). With a small additional effort (R=0.9), we can suppress the epidemic.

    Aiming for hospital saturation is therefore of no particular interest, and is in fact a more costly choice than suppression. There was one justification for this objective: the strategy of "flattening the curve", suggested at the beginning of the epidemic. The idea is to maximize the number of infected people so as to immunize the whole population rapidly. Now that a vaccine exists, this strategy no longer makes much sense. Even without a vaccine, infecting the whole population without saturating hospital services would take several years, to say nothing of the mortality.

    Suppressing the epidemic

    As noted above, a weak epidemic is easier to control than a strong one, so a control strategy should aim not for an "acceptable" number of cases but for a reproduction number R<1. In that situation, the number of cases decreases exponentially. When the number of cases is very low, imported cases must be taken into account: over one contamination period, the number of cases no longer goes from n to R × n but from n to R × n + I, where I is the number of imported cases. The case count therefore stabilizes at I/(1-R) (for example, 3 times the number of imported cases if R = 2/3). To push the case count down further, it then becomes important to prevent the importation of new cases (tests, quarantine, etc.).
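
    This fixed point is easy to verify numerically; here is a minimal sketch (my own, not from the original post):

    def cases_with_imports(n0, R, imports, generations):
        # Iterate n -> R * n + I; converges to I / (1 - R) when R < 1.
        n = n0
        for _ in range(generations):
            n = R * n + imports
        return n

    print(cases_with_imports(1000, 2/3, 50, 30))   # approaches 150
    print(50 / (1 - 2/3))                          # fixed point I/(1-R) = 150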

    When the number of cases is very low, it becomes feasible to apply very thorough targeted measures, that is, measures for every single case. For example, for each case, the person is isolated, and everyone who may also be a carrier is tested and isolated. Not only are the people potentially infected by the positive person identified, but the source of the contamination is also sought. Indeed, if the epidemic is driven by superspreading events ("clusters"), then it becomes more effective to trace back to the source of the contamination and then follow the contact cases.

    At low circulation, since these additional means of reducing transmission are available, it becomes possible to lift some non-targeted measures (for example general lockdown or other social restrictions, venue closures, and even mask wearing). For the targeted measures to have a large impact, a key point is that the majority of cases must be detected. That requires massive systematic testing, for example using saliva tests, drive-ins, pooled tests, and temperature checks. It requires that positive individuals not be discouraged from getting tested and isolating (in particular, by maintaining their income). It also requires systematic isolation of suspected cases while they await their results. In other words, to have a chance of working, this strategy must be applied as systematically as possible. Applying it to 10% of cases is of practically no use. That is why it only makes sense when circulation of the virus is low.

    It is important to note that in this strategy, most of the cost and the constraints fall on the testing apparatus, since tracing and isolation only occur when a case is detected, which ideally happens very rarely. While it requires a certain logistics, it is an economical strategy that places few constraints on the population.

    When to act?

    I have explained that maintaining a high level of cases is more costly and more constraining than maintaining a low level. Maintaining a very low level of cases is even less costly and less constraining, although it requires more organization.

    Of course, to move from a high plateau to a low one, the epidemic must shrink, which means transiently applying strong measures. If the epidemic is not controlled (and I repeat that this is the case whenever a variant is growing (R>1), even if the overall case count is falling), these measures will have to be applied at some point. When should they be applied? Is it better to wait as long as possible before doing so?

    Clearly it never is, because the longer one waits, the more the case count grows, and so the longer the restrictive measures will have to be applied before reaching the goal of low circulation, where finer measures (tracing) can take over. This may seem counter-intuitive when the case count is falling, but it is true nonetheless, because the medium-term case count depends only on the case count of the most contagious variant, not on the overall case count. So if the most contagious variant is expanding, waiting only lengthens the duration of the restrictive measures.

    By how much? Suppose that the case count of the virus (the most contagious variant) doubles every week, and that restrictive measures halve the case count in a week. Then waiting one week before applying them lengthens those measures by one week (I insist: lengthens, and not merely postpones). Under the more realistic assumption that the measures are somewhat less effective, each week of waiting increases the duration of the measures by a bit more than a week.
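
    This arithmetic can be checked directly. The sketch below (my own, not from the original post) computes how many weeks of restrictions are needed to reach a target case count after waiting 0, 1, or 2 weeks:

    import math

    def weeks_of_measures(n_start, growth, decay, n_target, wait_weeks):
        # Cases grow by `growth` per week while waiting, then shrink by
        # `decay` per week under restrictions until n_target is reached.
        n_peak = n_start * growth ** wait_weeks
        return math.log(n_target / n_peak) / math.log(decay)

    for wait in (0, 1, 2):
        print(wait, round(weeks_of_measures(10000, 2.0, 0.5, 1000, wait), 2))
    # 0 -> 3.32, 1 -> 4.32, 2 -> 5.32: each week of waiting adds a full week.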

    It is therefore always preferable to act as soon as R>1, so as to act for as short a time as possible, rather than to wait for the case count to grow considerably. The only possible justification for waiting would be mass vaccination that could be expected to bring the epidemic down through immunization, which is clearly not the case in the immediate term.

    Some relevant links:

    in Romain Brette on February 22, 2021 03:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How can we increase adoption of open research practices?

    This blog was written by Iain Hrynaszkiewicz, Director of Open Research Solutions for PLOS.

    Researchers are satisfied with their ability to share their own research data but may struggle with accessing other researchers’ data – according to PLOS research released as a preprint this week. Therefore, to increase data sharing in a findable and accessible way, PLOS will focus on better integrating existing data repositories and promoting their benefits rather than creating new solutions. We also call on the scholarly publishing industry to improve journal data sharing policies to better support researchers’ needs.

    PLOS has long supported Open Science with our data sharing policy. Our authors are far more likely to provide information about publicly available data compared to journals with less stringent policies. But best practice for data sharing – use of data repositories – is observed in less than 30% of PLOS publications. To help us understand if there are opportunities for new solutions to help improve adoption of best practice, we built on previous research into the frequency of problems associated with sharing research data. We investigated the importance researchers attach to different tasks associated with data sharing, and researchers’ satisfaction with their ability to complete these tasks.

    Through a survey conducted in 2020, which received 728 completed responses, we found that tasks relating to research impact, funder policy compliance, and credit had the highest importance scores. Tasks associated with funder, journal, and institutional policy compliance – including preparation of Data Management Plans (DMPs) – received high satisfaction scores from researchers, on average.

    52% of respondents reuse research data but the average satisfaction score for obtaining data for reuse – such as accessing data from journal articles or making requests for data from other individuals – was relatively low. Tasks associated with sharing data were rated somewhat important and respondents were reasonably well satisfied with their ability to accomplish them.

    Figure: When we plot mean importance and satisfaction score, respondents were on average satisfied with their ability to complete the majority of tasks associated with Data Preparation, Data Publishing and Reuse of their own data but dissatisfied with their ability to complete tasks associated with Reuse of other researchers’ data. Tasks associated with meeting policy requirements are both important and well satisfied.

    What are the implications?

    We presume that researchers are unlikely to seek new solutions to a problem or task that they are satisfied with their ability to accomplish. This implies there are few opportunities for new solutions to meet researcher needs for data sharing – at least in our cohort, which consisted mostly of PLOS authors. PLOS – and other publishers – can likely meet these needs by working to seamlessly integrate existing solutions that reduce the effort involved in some tasks, and by focusing on advocacy and education around the benefits of sharing data in a Findable, Accessible, Interoperable and Reusable (FAIR) manner.

    The challenges that researchers have reusing data could be addressed in part by strengthening journal data sharing policies – such as only permitting “data available on request” when there are legal or ethical restrictions on sharing, and improving the links between articles and supporting datasets. Generic “data available on request” statements in publications, which are not permitted under PLOS’s policies, usually mean data will not be available.

    While our research revealed a “negative result” with respect to new solution opportunities, the results are informative for how PLOS can best meet known researcher needs. This includes more closely partnering with established data repositories and improving the linking of research data and publications. These are important parts of our plans to support adoption of Open Science in 2021 and beyond.

    Read the preprint here and access the survey dataset and survey instrument here.

    The post How can we increase adoption of open research practices? appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 22, 2021 03:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Seeking the secrets of the universe in the particle

    An award-winning particle physicist in Guatemala hopes to convince society that basic science matters too

    in Elsevier Connect on February 19, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: Caglar Cakan: neurolib

    neurolib

    Caglar Cakan will introduce neurolib and discuss its development in this developer session.

    The abstract for the talk is below:

    neurolib is a computational framework for whole-brain modelling written in Python. It provides a set of neural mass models that represent the average activity of a brain region on a mesoscopic scale. In a whole-brain network model, brain regions are connected with each other based on structural connectivity data, i.e. the connectome of the brain. neurolib can load structural and functional data sets, set up a whole-brain model, manage its parameters, simulate it, and organize its outputs for later analysis. The activity of each brain region can be converted into a simulated BOLD signal in order to calibrate the model to empirical data from functional magnetic resonance imaging (fMRI). Extensive model analysis is possible using a parameter exploration module, which makes it possible to characterize the model's behaviour given a set of changing parameters. An optimization module allows for fitting a model to multimodal empirical data using an evolutionary algorithm. Besides its included functionality, neurolib is designed to be extendable such that custom neural mass models can be implemented easily. neurolib offers computational neuroscientists a versatile platform for prototyping models, managing large numerical experiments, studying the structure-function relationship of brain networks, and performing in-silico optimization of whole-brain models.
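
    As a taste of the workflow described in the abstract, here is a minimal sketch of setting up and running a whole-brain model, loosely following the usage shown in neurolib's README; the dataset key and parameter names are assumptions to be checked against the documentation for your version.

    from neurolib.models.aln import ALNModel
    from neurolib.utils.loadData import Dataset

    ds = Dataset("gw")                # example structural dataset (assumed key)
    model = ALNModel(Cmat=ds.Cmat, Dmat=ds.Dmat)  # connectome-coupled model

    model.params['duration'] = 10 * 1000   # simulate 10 seconds (in ms)
    model.run(bold=True)                   # also produce a simulated BOLD signal

    print(model.output.shape)              # regional activity over time
    print(model.BOLD.BOLD.shape)           # BOLD signal for comparison to fMRI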

    in INCF/OCNS Software Working Group on February 18, 2021 09:19 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The dynamism of clinical knowledge

    How can we meet clinicians’ knowledge needs in a rapidly evolving medical world?

    in Elsevier Connect on February 17, 2021 02:42 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How good science communication can cut through the COVID “madness”

    Vaccine Editor-in-Chief talks about why people are complacent about COVID – and how we can help them take it seriously

    in Elsevier Connect on February 16, 2021 10:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS and Uppsala University Announce Publishing Deal

    Uppsala University and the Public Library of Science (PLOS) today announced two 3-year publishing agreements that allow researchers to publish in PLOS journals without incurring article processing charges (APC). The Community Action Publishing (CAP) agreement enables Uppsala University researchers to publish fee-free in PLOS Medicine and PLOS Biology. The Flat fee agreement also allows them to publish in PLOS ONE and PLOS’ community journals[1]. These models shift publishing costs from authors to research institutions based on prior publishing history and anticipated future growth with PLOS.

    “We are thrilled to be collaborating with Uppsala University, our first European flat fee customer, on two new models that allow Uppsala researchers to publish without APCs,” said Sara Rouhi, Director of Strategic Partnerships for PLOS. “As one of the most prestigious universities in the world, Uppsala is well-positioned to further the cause of equitable and barrier-free open reading and publishing, and we are delighted to join with them on this effort.”

    PLOS’ Community Action Publishing (CAP) is PLOS’ effort to sustain highly selective journal publishing without Article Processing Charges for authors. More details about the model can be found here and here. PLOS’ Flat Fee model enables APC-free publishing with PLOS’ other five journals creating efficiency and reducing administrative overhead for managing gold APC funds. Uppsala University and PLOS will also collaborate on future data, metrics, and tools for institutions to evaluate Open Access publishing agreements.

    The Uppsala University publishing deal continues the momentum for PLOS, following other agreements with the University of California system, Big Ten Academic Alliance, Jisc (including University College London, Imperial College London, University of Manchester) and the Canadian Research Knowledge Network among others.


    [1] PLOS Computational Biology, PLOS Genetics, PLOS Neglected Tropical Diseases, and PLOS Pathogens.

    The post PLOS and Uppsala University Announce Publishing Deal appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 15, 2021 03:15 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 15 February 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 15 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 1 February 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @bt0dotninja. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 18 January 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 18 January at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 next Monday'
    

    The meeting will be chaired by @ankursinha (me). The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.