  • Solar Silliness: The Heart-Sun Connection

    On Twitter, I learned about a curious new paper in Scientific Reports: Long-Term Study of Heart Rate Variability Responses to Changes in the Solar and Geomagnetic Environment by Abdullah Alabdulgader and colleagues. According to this article, the human heart "responds to changes in geomagnetic and solar activity". This paper claims that things like solar flares, cosmic rays and sunspots affect the beating of our hearts. Spoiler warning: I don't think this is true. In fact, I think the

    in Discover magazine - Neuroskeptic on March 22, 2018 08:11 PM.

  • Atacama mummy’s deformities were unduly sensationalized

    A malformed human mummy known as Ata has been sensationalized as alien. A DNA analysis helps overturn that misconception.

    in Science News on March 22, 2018 06:54 PM.

  • How bees defend against some controversial insecticides

    Some bees have enzymes that allow them to resist toxic compounds in some neonicotinoid pesticides.

    in Science News on March 22, 2018 06:41 PM.

  • Earwigs take origami to extremes to fold their wings

    Stretchy joints let earwig wings flip quickly between folded and unfurled.

    in Science News on March 22, 2018 06:10 PM.

  • Journal retracts study linking “gut makeover” to weight loss, improved health

    Over the objections of the authors, PLOS ONE has retracted a paper linking a diet designed to restore healthy gut bacteria to weight loss and other benefits. The study, published in June 2017, claimed to show that a “Microbiome restoration diet improves digestion, cognition and physical and emotional wellbeing.” The diet was one championed by …

    in Retraction watch on March 22, 2018 03:00 PM.

  • Why it’s great to have a geologist in the house

    Editor in Chief Nancy Shute enthuses about learning how ancient plants may have helped make Earth muddy.

    in Science News on March 22, 2018 02:19 PM.

  • Readers ponder children’s pretend play, planetary dust storms and more

    Readers had questions about children’s fantasy play, lasers creating 3-D images and dust storms on Mars.

    in Science News on March 22, 2018 02:18 PM.

  • The great Pacific garbage patch may be 16 times as massive as we thought

    The giant garbage patch between Hawaii and California weighs at least 79,000 tons, a new estimate suggests.

    in Science News on March 22, 2018 02:00 PM.

  • Figures in cancer paper at root of newly failed compound called into question

    How much of a role did a potentially problematic paper play in the demise of a once-promising compound? Researchers are questioning the validity of a high-profile article, published by Nature in 2006. Although the letter is 12 years old, the concerns have current implications: It was among the early evidence used to develop a cancer compound that …

    in Retraction watch on March 22, 2018 01:15 PM.

  • How oral vaccines could save Ethiopian wolves from extinction

    A mass oral vaccination program in Ethiopian wolves could pave the way for other endangered species and help humans, too.

    in Science News on March 22, 2018 01:00 PM.

  • Nerd Food: Neurons for Computer Geeks - Part IV: More Electricity

    Part I of this series looked at a neuron from above; Part II attempted to give us the fundamental building blocks in electricity required to get us on the road to modeling neurons. We did a quick interlude with a bit of coding in part III but now, sadly, we must return to boring theory once more.

    Now that we grok the basics of electricity, we need to turn our attention to the RC circuit. As we shall see, this circuit is of particular interest when modeling neurons. The RC circuit is so called because it is a circuit composed of a Resistor and a Capacitor. We've already got some vague understanding of circuits and resistors, so let's start by having a look at this new crazy critter, the capacitor.


    Just like the battery is a source of current, one can think of the capacitor as a temporary store of charge. If you plug a capacitor into a circuit with just a battery, it will start to "accumulate" charge over time, up to a "maximum limit". But how exactly does this process work?

    In simple terms, the capacitor is made up of two metal plates, one of which will connect to the positive end of the battery and another which connects to the negative end. At the positive end, the metal plate will start to lose negative charges because these are attracted to the positive end of the battery. This will make this metal plate positively charged. Similarly, at the negative end, the plate will start to accumulate negative charges. This happens because the electrons are repelled by the negative end of the battery. Whilst this process is taking place, the capacitor is charging.

    At some point, the process reaches a kind of equilibrium, whereby the electrons in the positively charged plate are attracted equally to the plate as they are to the positive end of the battery, and thus stop flowing. At this point we say the capacitor is charged. It is interesting to note that both plates of the capacitor end up with the same "total" charge but different signs (i.e. -q and +q).


    We mentioned a "maximum limit". A few things control this limit: how big the plates are, how much space there is between them and the kind of material we place between them, if any. The bigger the plates and the closer they are - without touching - the more you can store in the capacitor. The material used for the plates is, of course, of great importance too - it must be some kind of metal good at conducting.

    In a more technical language, this notion of a limit is captured by the concept of capacitance, and is given by the following formula:

    \begin{align} C = \frac{q}{V} \end{align}

    Let's break it down to its components to see what the formula is trying to tell us. The role of V is to inform us about the potential difference between the two plates. This much is easy to grasp; since one plate is positively charged and the other negatively charged, it is straightforward to imagine that a charge will have a different electric potential in each plate, and thus that there will be an electric potential difference between them. q tells us about the magnitude of the charges that we placed on the plates - i.e. ignoring the sign. It wouldn't be too great a leap to conceive that plates with a larger surface area would probably have more "space" for charges and so a larger q - and vice-versa.

    Capacitance is then the ratio between these two things; a measure of how much electric charge one can store for a given potential difference. It may not be very obvious from this formula, but capacitance is constant. That is to say, a given capacitor has a fixed capacitance, determined by the physical properties described above. This formula does not describe the discharging or charging process - but of course, capacitance is used in the formulas that describe those.

    Capacitance is measured in SI units of Farads, denoted by the letter F. A farad is 1 coulomb over 1 volt:

    \begin{align} 1F = \frac{C}{V} \end{align}
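
    As a quick sanity check - using made-up numbers rather than any real device - consider a capacitor holding charges of ±1 μC on its plates, with a potential difference of 5 V between them:

    \begin{align} C = \frac{q}{V} = \frac{1 \times 10^{-6}\,C}{5\,V} = 0.2\,\mu F \end{align}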

    Capacitors and Current

    After charging a capacitor, one may be tempted to discharge it. For that one could construct a simple circuit with just the capacitor. Once the circuit is closed, the negative charges will start to flow to the positively charged plate, at full speed - minus the resistance of the material. Soon enough both plates would be made neutral. At first glance, this may appear to be very similar to our previous circuit with a battery. However, there is one crucial difference: the battery circuit had a constant voltage and a constant current (for a theoretical battery) whereas a circuit with a discharging capacitor has voltage and current that decay over time. By "decaying", all we really mean is that we start at some arbitrarily high value and we move towards zero over a period of time. This makes intuitive sense: you cannot discharge the capacitor forever; and, as you discharge it, the voltage starts to decrease - for there are fewer charges on the plates and so less potential difference - and similarly, so does the current - for there is less "pressure" to make the charges flow.

    This intuition is formally captured by the following equation:

    \begin{align} I(t) = C \frac{dV(t)}{dt} \end{align}

    I'm rather afraid that, at this juncture, we have no choice but to introduce Calculus. A proper explanation of Calculus is a tad outside the remit of these posts, so instead we will have to make do with some common-sense but extremely hand-waved interpretations of the ideas behind it. If you are interested in a light-hearted but still comprehensive treatment of the subject, perhaps A Gentle Introduction To Learning Calculus may be to your liking.

    Let's start by taking a slightly different representation of the formula above and then compare these two formulas.

    \begin{align} i = C \frac{dv}{dt} \end{align}

    In the first case we are talking about the current I, which normally is some kind of average current over some unspecified period. Up to now, time didn't really matter - so we got away with just talking about I in these general terms. This was the case with Ohm's Law in part II. However, as we've seen, it is not so with capacitors - so we need to make the current specific to a point in time. For that we supply an "argument" to I - I(t); here, a mathematician would say that I is a function of time. In the second case, we make use of i, which is the instantaneous current through the capacitor. The idea is that, somehow, we are able to know - for any point in time - what the instantaneous current is.

    How we achieve that is via the magic of Calculus. The expression dv/dt in the second formula provides us with the instantaneous rate of change of the voltage over time. The same notion can be applied to V, as per first formula.

    These formulas may sound awfully complicated, but what they are trying to tell us is that the capacitor's current has the following properties:

    • it varies as a "function" of time; that is to say, different time points have different currents. Well, that's pretty consistent with our simplistic notion of a decaying current.
    • it is "scaled" by the capacitor's capacitance C; "bigger" capacitors can hold on to higher currents for longer when compared to "smaller" capacitors.
    • the change in electric potential difference varies as a function of time. This is subtle but also makes sense: we imagined some kind of decay for our voltage, but there was nothing to say the decay would remain constant until we reached zero. This formula tells us it does not; voltage may decrease faster or slower at different points in time.
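
    To make the relationship concrete, here is a minimal sketch in C++ (the language used for code later in this series) that approximates the instantaneous current from discrete voltage samples via a finite difference. The capacitance, time step and voltage samples are all made up for illustration:

    #include <iostream>
    #include <vector>

    int main() {
        const double C = 1e-6;   // capacitance in farads (made-up value)
        const double dt = 1e-3;  // time between voltage samples, in seconds

        // Hypothetical voltage samples across the capacitor, one per dt.
        const std::vector<double> v = { 5.0, 4.5, 4.1, 3.7, 3.4 };

        // Approximate i = C * dv/dt with a finite difference between samples.
        for (std::size_t n = 0; n + 1 < v.size(); ++n) {
            const double i = C * (v[n + 1] - v[n]) / dt;
            std::cout << "t=" << n * dt << "s i=" << i << "A\n";
        }
    }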

    Circuits: Parallel and Series

    The RC circuit can appear in a parallel or series form, so it's a good time to introduce these concepts. One way we can connect circuits is in series; that is, all components are connected along a single path, such that the current flows through all of them, one after the other. If any component fails, the flow will cease.

    This is best understood by way of example. Let's imagine the canonical example of a battery - our old friend the 1.5V AA battery - and three small light bulbs. A circuit that connects them in series would be made up of a cable segment plugged onto one of the battery's terminals - say +, then connected to the first light bulb. A second cable segment would then connect this light bulb to another light bulb, followed by another segment and another light bulb. Finally, a cable segment would connect the last light bulb to the other battery terminal - say -. Graphically - and pardoning my inability to use Dia to create circuit diagrams - it would look more or less like this:


    Figure 1: Series circuit. Source: Author

    This circuit has a few interesting properties. First, if any of the light bulbs fail, all of them will stop working because the circuit is no longer closed. Second, if one were to add more and more light bulbs, the brightness of each light bulb would start to decrease. This is because each light bulb is in effect a resistor - the light shining being a byproduct of said resistance - and so each one decreases the current. So it is that in a series circuit the total resistance is given by the sum of all individual resistances, and the current is the same for all elements.

    Parallel circuits are a bit different. The idea is that two or more components are connected to the circuit in parallel, i.e. there are two or more paths along which the current can flow at the same time. So we'd have to modify our example to have a path to each of the light bulbs which exists in parallel to the main path - quite literally a segment of cable that connects the other segments of cable, more or less like so:


    Figure 2: Parallel circuit. Source: Author

    Here you can see that if a bulb fails, there is still a closed loop in which current can flow, so the other bulbs should be unaffected. This also means that the voltage is the same for all components in the circuit. Current and resistance are now "relative" to each component, and it is possible to compute the overall current for the circuit via Kirchhoff's Current Law. Simplified, it means that the current for the circuit is the sum of all currents flowing through each component.

    This will become significant later on when we finally return to the world of neurons.
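
    Before we get there, here is a toy illustration of the two rules - resistances summing in series, currents summing across parallel branches. The battery voltage and bulb resistances are made-up values:

    #include <iostream>
    #include <vector>

    int main() {
        const double V = 1.5;  // our old friend the AA battery, in volts
        const std::vector<double> bulbs = { 10.0, 10.0, 10.0 };  // made-up resistances, in ohms

        // Series: total resistance is the sum; one current flows through everything.
        double r_series = 0.0;
        for (const double r : bulbs) r_series += r;
        std::cout << "series: R=" << r_series << " ohms, I=" << V / r_series << " A\n";

        // Parallel: voltage is shared; the circuit current is the sum of branch currents.
        double i_parallel = 0.0;
        for (const double r : bulbs) i_parallel += V / r;
        std::cout << "parallel: I=" << i_parallel << " A\n";
    }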

    The RC Circuit

    With all of this we can now move to the RC circuit. In its simplest form, the circuit has a source of current with a resistor and a capacitor:


    Figure 3: Source: Wikipedia, RC circuit

    Let's try to understand how the capacitor's voltage will behave over time. This circuit is rather similar to the one we analysed when discussing capacitance, with the exception that we now have a resistor as well. But in order to understand this, we must return to Kirchhoff's current law, which we hand-waved a few paragraphs ago. Wikipedia tells us that:

    The algebraic sum of currents in a network of conductors meeting at a point is zero.

    One way to understand this statement is to think that the total quantity of current entering a junction point must be identical to the total quantity leaving that junction point. If we consider entering to be positive and leaving to be negative, that means that adding the two together must yield zero.

    Because of Kirchhoff's law, we can state that, for the positive terminal of the capacitor:

    \begin{align} i_c(t) + i_r(t) = 0 \end{align}

    That is: at any particular point in time t, the current flowing through the capacitor added to the current flowing through the resistor must sum to zero. However, we can now make use of the previous formulas; after all, our section on capacitance taught us that:

    \begin{align} i_c(t) = C \frac{dv(t)}{dt} \end{align}

    And making use of Ohm's Law we can also say that:

    \begin{align} i_r(t) = \frac{v(t)}{R} \end{align}

    So we can expand the original formula to:

    \begin{align} C \frac{dv(t)}{dt} + \frac{v(t)}{R} = 0 \end{align}

    Or, dropping the explicit time dependence for brevity:

    \begin{align} C \frac{dV}{dt} + \frac{V}{R} = 0 \end{align}

    I'm not actually going to follow the remaining steps to compute V, but you can see them here and they are fairly straightforward, or at least as straightforward as calculus gets. The key point is, when you solve the differential equation for V, you get:

    \begin{align} V(t) = V_0e^{-\frac{t}{RC}} \end{align}

    With V0 being the voltage at time zero. This is called the circuit's natural response. This equation is very important. Note that we are now able to describe the behaviour of voltage over time with just a few inputs: the starting voltage, the time, the resistance and the capacitance.

    A second thing falls out of this equation: the RC Time Constant, or τ. It is given by:

    \begin{align} \tau = RC \end{align}
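
    To get a feel for the natural response and the role of τ, one can tabulate the equation directly. A minimal sketch, using arbitrary values for V0, R and C:

    #include <cmath>
    #include <iostream>

    int main() {
        const double v0 = 5.0;    // starting voltage, in volts (arbitrary)
        const double r = 1e3;     // resistance, in ohms (arbitrary)
        const double c = 1e-6;    // capacitance, in farads (arbitrary)
        const double tau = r * c; // the RC time constant

        // V(t) = V0 * e^(-t / RC), sampled at whole multiples of tau.
        for (int n = 0; n <= 5; ++n) {
            const double t = n * tau;
            std::cout << "t=" << t << "s V=" << v0 * std::exp(-t / tau) << "V\n";
        }
    }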

    The Time Constant is described in a very useful way on this page, so I'll just quote them and their chart here:

    The time required to charge a capacitor to 63 percent (actually 63.2 percent) of full charge or to discharge it to 37 percent (actually 36.8 percent) of its initial voltage is known as the TIME CONSTANT (TC) of the circuit.


    Figure 4: The RC Time constant. Source: Concepts of alternating current
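
    The 63 and 37 percent figures fall straight out of the natural response: evaluating it at t = τ = RC gives

    \begin{align} V(\tau) = V_0 e^{-\frac{RC}{RC}} = V_0 e^{-1} \approx 0.368\,V_0 \end{align}

    So after one time constant a discharging capacitor is down to roughly 36.8 percent of its initial voltage - equivalently, a charging one is roughly 63.2 percent of the way to full charge.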

    What next?

    Now we understand the basic behaviour of the RC Circuit, together with a vague understanding of the maths that describe it, we need to return to the neuron's morphology. Stay tuned.

    Created: 2015-09-05 Sat 18:56


    in Marco Craveiro on March 22, 2018 10:48 AM.

  • Nerd Food: Neurons for Computer Geeks - Part VI: LIF At Long Last!

    Welcome to part VI of a multi-part series on modeling neurons. In part V we added a tad more theory to link electricity with neurons, and also tried to give an idea of just how complex neurons are. Looking back on that post, I cannot help but notice I skipped one bit that is rather important to understanding Integrate-and-Fire (IAF) models. So let's look at that first and then return to our trail.

    Resting Potential and Action Potential

    We have spoken before about the membrane potential and the resting membrane potential, but we did so with such a high degree of hand-waving it now warrants revisiting. When we are talking about the resting membrane potential we mean just that - the value for the membrane potential when nothing much is happening. That is the magical circa -65 mV we discussed before - with all of the related explanations on conventions around negative voltages. However, time does not stand still and things happen. The cell receives input from other neurons, and this varies over time. Some kinds of inputs can cause events to trigger on the receiving neuron: active ion channels may get opened or shut, ions move around, concentrations change and so forth, and thus, the cell will change its membrane potential in response. When these changes result in a higher voltage - such as moving to -60 mV - we say a depolarisation is taking place. Conversely, when the voltage becomes more negative, we say hyperpolarisation is occurring.

    Now, it may just happen that there is a short-lived but "strong" burst of depolarisation, followed by equally rapid hyperpolarisation - and, as a result of which, the Axon's terminal decides to release neurotransmitters into the synapse (well, into the synaptic gap or synaptic cleft to be precise). This is called an action potential, and it is also known by many other names such as "nerve impulses" or "spikes". When you hear that "a neuron has fired" this means that an action potential has just been emitted. If you record the neuron's behaviour over time you will see a spike train - a plot of the voltage over time, clearly showing the spikes. Taking a fairly random example:


    Figure 1: Source: Wikipedia, Neural oscillation

    One way of picturing this is as a kind of "chain-reaction" whereby something triggers the voltage of the neuron to rise, which triggers a number of gates to open, which then trigger the voltage to rise and so on, until some kind of magic voltage threshold is reached where the inverse occurs: the gates that were causing the voltage to rise shut and some other gates that cause the voltage to decrease open, and so on, until we fall back down to the resting membrane potential. The process feeds back on itself, first as a positive feedback and then as a negative feedback. In the case of the picture above, something else triggers us again and again, until we finally come to rest.

    This spiking or firing behaviour is what we are trying to model.

    Historical Background

    As it happens, we are not the first ones to try to do so. A couple of years after Einstein's annus mirabilis, a French chap called Louis Lapicque was also going through his own personal moment of inspiration, the output of which was the seminal Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. It is summarised here in fairly accessible English by Abbott.

    Lapicque had the insight of imagining the neuron as an RC circuit, with the membrane potential's behaviour explained as the interplay between capacitor and resistor; the action potential is then the capacitor reaching a threshold followed by a discharge. Even with our faint understanding of the subject matter, one cannot but appreciate Lapicque's brilliance to have the ability to reach these conclusions in 1907. Of course, he also had to rely on the work of many others to get there, let's not forget.

    This model is still considered a useful model today, even though we know so much more about neurons now - a great example of what we mentioned before in terms of the choices of the level of detail when modeling. Each model is designed for a specific purpose and it should be as simple as possible for the stated end (but no simpler). As Abbott says:

    While Lapicque, because of the limited knowledge of his time, had no choice but to model the action potential in a simple manner, the stereotypical character of action potentials allows us, even today, to use the same approximation to avoid computation of the voltage trajectory during an action potential. This allows us to focus both intellectual and computation resources on the issues likely to be most relevant in neural computation, without expending time and energy on modeling a phenomenon, the generation of action potentials, that is already well understood.

    The IAF Family

    Integrate-and-Fire is actually a family of models - related because all of them follow Lapicque's original insights. Over time, people have addressed shortcomings in the model by adding more parameters and modifying it slightly, and from this other models were born.

    In general, models in the IAF family are single neuron models with a number of important properties (as per Izhikevich):

    • The spikes are all or none; that is, we either spike or we don't. This is a byproduct of the way spikes are added to the model, as we shall see later. This also means all spikes are identical because they are all created the same way.
    • The threshold for the spike is well defined and there is no ambiguity as to whether the neuron will fire or not.
    • It is possible to add a refractory period, similarly to how we add the spike. The refractory period is a time during which the neuron is less excitable (e.g. ignores inputs) and occurs right after the spike.
    • Positive currents are used as excitatory inputs and negative currents as inhibitory inputs.

    But what do the members of this family look like? We will take a few examples from Wikipedia to make a family portrait and then focus on LIF.

    IAF: Integrate-and-Fire

    This is the Lapicque model. It is also called a "perfect" or "non-leaky" neuron. The formula is as follows:

    \begin{align} I(t) = C_m \frac{dV_m(t)}{dt} \end{align}

    The m's are there to signify membrane, nothing else. Note that it's the job of the user to determine θ - that is, the point at which the neuron spikes - and then to reset everything to zero and start again. If you are wondering why it's called "integrate", that's because the differential equation must be integrated before we can compare the current value to a threshold and then, if we're past it, well - fire! Hence Integrate-and-Fire.

    Wikipedia states this in a classier way, of course:

    [This formula] is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold Vth, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases.
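
    As a rough illustration of the quote above - integrate the input current, compare against a user-chosen threshold, fire and reset - here is a minimal sketch. The capacitance, threshold and input current are all made-up values, not parameters from any particular paper:

    #include <iostream>

    int main() {
        const double c_m = 1.0;    // membrane capacitance (arbitrary units)
        const double theta = 1.0;  // user-chosen firing threshold, in volts
        const double i_in = 4.0;   // constant input current (made up)
        const double dt = 0.01;    // integration time step, in seconds

        double v = 0.0;  // membrane potential, starting at "rest" (zero here)
        for (double t = 0.0; t < 1.0; t += dt) {
            v += dt * i_in / c_m;  // integrate: dV/dt = I / C
            if (v >= theta) {      // threshold crossed...
                std::cout << "spike at t=" << t << "s\n";
                v = 0.0;           // ...fire, reset, and start again
            }
        }
    }

    Note how, exactly as the quote says, a bigger i_in makes the firing frequency grow without bound - there is nothing in the model to stop it.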

    Integrate-and-Fire with Refractory Period

    It is possible to extend IAF to take the refractory period into account. This is done by adding a period of time t_ref during which the neuron does not fire.

    LIF: Leaky Integrate-and-Fire

    One of the problems of IAF is that it will "remember" stimulus, regardless of the time that elapses between stimuli. By way of example: if a neuron gets some input below the firing threshold at some time (say t_a), then nothing for a long period of time and then a subsequent stimulus at say t_b, this will cause the neuron to fire (assuming the two inputs together are above the threshold). In the real world, neurons "forget" about below-threshold stimulus after a certain amount of time has elapsed. This problem is solved in LIF by adding a leak term to IAF. Wikipedia's formula is like so:

    \begin{align} I_m(t) - \frac{V_m(t)}{R_m} = C_m \frac{dV_m(t)}{dt} \end{align}

    We will discuss it in detail later on.

    Interlude: Leaky Integrators and Low-Pass Filters

    Update: this section got moved here from an earlier post.

    A minor detour into the world of "Leaky Integrators". As it turns out, mathematicians even have a name for functions like the one above: they are called Leaky Integrators. A leaky integrator is something that takes an input and "integrates" it - that is, sums it over a range - but in doing so, starts "leaking" values out. In other words, a regular sum of values over a range should just result in an ever-growing output. With a leaky integrator, we add up to a point, but then we start leaking, resetting the value of the sum back towards where we started off.

    It turns out these kinds of functions have great utility. For example, imagine that you have an input signal mixing slow drifts with rapid fluctuations. When you supply this input to a leaky integrator, it can be used to "filter out" the rapid fluctuations: components of the input that vary faster than a certain cut-off are smoothed away in the output. This is known as a low-pass filter. One can conceive of a function that acted in the opposite way - a high-pass filter.
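
    A leaky integrator is only a couple of lines of code. A minimal sketch, with a made-up leak rate and input signal, showing the running sum climbing during a burst of input and then decaying back towards zero once the input goes quiet:

    #include <iostream>
    #include <vector>

    int main() {
        const double leak = 0.1;  // fraction of the accumulated value lost per step (made up)

        // Hypothetical input: a burst of activity followed by silence.
        const std::vector<double> input = { 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0 };

        double sum = 0.0;
        for (const double x : input) {
            sum += x;           // integrate...
            sum -= leak * sum;  // ...but leak a fraction of what we have accumulated
            std::cout << sum << "\n";
        }
    }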

    Exponential Integrate-and-Fire

    In this model, spike generation is exponential:

    \begin{align} \frac{dX}{dt} = \Delta_T \exp\left(\frac{X - X_T}{\Delta_T}\right) \end{align}

    Wikipedia explains it as follows:

    where X is the membrane potential, XT is the membrane potential threshold, and ΔT is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses XT, it diverges to infinity in finite time.


    We could continue and look into other IAF models, but you get the point. Each model has limitations, and as people work through those limitations - e.g. try to make the spike trains generated by the model closer to those observed in reality - they make changes to the model and create new members of the IAF family.

    Explaining the LIF Formula

    Let's look at a slightly more familiar formulation of LIF:

    \begin{align} \tau_m \frac{dv}{dt} = -v(t) + RI(t) \end{align}

    By now this should make vague sense, but let's do a step-by-step breakdown just to make sure we are all on the same page. First, we know that the current of the RC circuit is defined like so:

    \begin{align} I(t) = I_R + I_C \end{align}

    From Ohm's Law we also know that:

    \begin{align} I_R = \frac {v}{R} \end{align}

    And from the rigmarole of the capacitor we also know that:

    \begin{align} I_C = C \frac{dv}{dt} \end{align}

    Thus it's not much of a leap to say:

    \begin{align} I(t) = \frac {v(t)}{R} + C \frac{dv}{dt} \end{align}

    If we now multiply both sides by R, we get:

    \begin{align} RI(t) = v(t) + RC \frac{dv}{dt} \end{align}

    Remember that RC is τ, the RC time constant; in this case we are dealing with the membrane, hence the m. With that, the rest of the rearranging to the original formula should be fairly obvious.

    Also, if you recall, we mentioned Leaky Integrators before. You should hopefully be able to see the resemblance between these and our first formula.

    Note that we did not model spikes explicitly with this formula. However, when it comes to implementing it, all that is required is to look for a threshold value for the membrane potential - called the spiking threshold; when that value is reached, we need to reset the membrane potential back to a lower value - the reset potential.

    And with that we have enough to start thinking about code…

    Method in our Madness

    … Or so you may think. First, a quick detour on discretisation. As it happens, computers are rather fond of discrete things, as opposed to the continuous entities that inhabit the world of calculus. Computers are very much of the same opinion as the priest who said:

    And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities?

    So we cannot directly represent differential equations in the computer - not even the simpler ordinary differential equations (ODEs), with their single independent variable. Instead, we need to approximate them with a method for numerical integration of the ODE. Remember: when we say integration we just mean "summing".

    Once we enter the world of methods and numerical analysis we are much closer to our ancestral home of Software Engineering. The job of numerical analysis is to look for ways in which one can make discrete approximations of the problems in mathematical analysis - like, say, calculus. The little recipes they come up with are called numerical methods. A method is nothing more than an algorithm, a set of steps used iteratively. One such method is the Euler Method: "[a] numerical procedure for solving ordinary differential equations (ODEs) with a given initial value", as Wikipedia tells us, and as it happens that is exactly what we are trying to do.

    So how does the Euler method work? Very simply. First you know that:

    \begin{align} y(t_0) = y_0 \\ y'(t) = f(t, y(t)) \end{align}

    That is, at the beginning of time we have a known value. Then, for all other t's, we use the current value in f in order to be able to compute the next value. Let's imagine that our steps - how much we are moving forwards by - are of a size h. You can then say:

    \begin{align} t_{n+1} = t_n + h \\ y_{n+1} = y_n + h \cdot f(t_n, y_n) \end{align}

    And that's it. You just need to know where you are right now, by how much you need to scale the function - i.e. the step size - and then apply the function to the current values of t and y.

    In code:

    template<typename F>
    double euler(F f, double y0, double start, double end, double h) {
        double y = y0;
        // Walk from start to end in steps of h, applying the Euler update.
        for (auto t(start); t < end; t += h) {
            y += h * f(t, y, h);
        }
        return y;
    }
    We are passing h to the function F because it needs to know about the step size, but other than that it should be a pretty clean mapping from the maths above.

    This method is also known as Forward Euler or Explicit Euler.
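
    As a usage example - and a nice way to close the loop with part IV - we can feed euler the capacitor's natural response, dv/dt = -v/RC, and compare the numerical answer against the analytic V0 e^(-t/RC). The constants are arbitrary:

    #include <cmath>
    #include <iostream>

    // Assumes the euler template defined above is in scope.
    int main() {
        const double r = 1e3, c = 1e-6;  // arbitrary resistance and capacitance
        const double v0 = 5.0;           // arbitrary starting voltage
        const double tau = r * c;

        // dv/dt = -v / RC; t and h are unused here but kept for euler's signature.
        auto f = [=](double, double v, double) { return -v / (r * c); };

        const double t_end = 3 * tau;
        const double v_euler = euler(f, v0, 0.0, t_end, tau / 100.0);
        const double v_exact = v0 * std::exp(-t_end / tau);
        std::cout << "euler: " << v_euler << " exact: " << v_exact << "\n";
    }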

    What next?

    And so we run out of time yet again before we can get into serious coding. In the next instalment we shall cover the implementation of the LIF model.

    Created: 2015-09-16 Wed 18:05


    in Marco Craveiro on March 22, 2018 10:46 AM.

  • Africa generates less than 1% of the world’s research; data analytics can change that

    An in-depth analysis of the continent’s research reveals promising developments – and strategies for continued improvement

    in Elsevier Connect on March 22, 2018 09:36 AM.

  • New findings contradict headline-grabbing paper that suggested excessive small talk makes us miserable

    By Emma Young

    If you want to feel happier, avoid small talk and aim instead for profound conversations. That was the message the mainstream media took from a well-publicised paper published in Psychological Science in 2010 (e.g. Talk Deeply, Be Happy? asked the New York Times). But now an extension of that study, in press at the same journal (available as a pre-print), and involving two of the psychologists behind the original work, has found no evidence that how much – or little – time you spend chatting about the weather or what you’re having for dinner will affect your life satisfaction. “The failure to replicate the original small talk effect is important as it has garnered considerable scientific and lay interest,” note the authors.

    The original work, led by Matthias Mehl at the University of Arizona, involved just 79 US college students – and given that students are surrounded by a host of potential new friends, that sample is not exactly representative of the general population in terms of daily social interactions. So Mehl, Simine Vazire at UC-Davis, and other researchers assessed the conversations of three new groups of people – 50 women who had been diagnosed with breast cancer, and their partners; 184 medically healthy adults; and 122 adults who had recently separated from their partners. These participants also completed life satisfaction and personality questionnaires.

    As in the original paper, the participants wore Electronically Activated Recorders (EARs) that regularly sampled ambient sounds – including conversations – for between 30 and 50 seconds every 9 or 12.5 minutes, depending on the specific group taking part. All participants wore the EARs between a Friday night and Monday morning, and they were allowed to review the recordings and delete any that they preferred to remain private.

    Coders listened back to the recordings and assessed both the frequency and type of conversations each participant engaged in. Small talk was defined as uninvolved, banal conversation in which only trivial information was exchanged (e.g. “I stepped on something”, “What are you up to?”); “Substantive” conversations involved the exchange of “meaningful” information (e.g. “There are a lot of high-stress A-type personalities out there” and “They have already raised ten million dollars for Haiti.”) Other types of conversations – involving gossip, say, or practical information – were also noted.

    Across the different groups, participants who were happier tended to have more conversations, with the two measures having a modest correlation. There was also a moderately positive association between engaging specifically in more substantive conversations and life satisfaction (but this was a less strong result than in the original study). Crucially, however, there was no association at all between engaging in more small talk and expressing less life satisfaction – the finding that grabbed the headlines previously. Also, contrary to one of the team’s initial predictions, how extraverted or introverted a person was made no real difference to associations between conversation type and life satisfaction.

    Further contradicting the original small talk findings, other recent research has actually found that small talk can make people feel happier. In 2016, for example, Nicholas Epley at the University of Chicago led work finding that no matter what their positions on the extraversion spectrum, people felt more positive after engaging strangers in conversation.

    In relation to the new paper, it’s worth noting that as two of the four groups consisted of people going through major life events, these results may not be generalisable to the general population. And, as the authors themselves note in relation to the main positive finding, the data is correlational: “Whether it is the satisfied person who attracts more substantive conversations or whether having substantive conversations makes people more satisfied with their lives is still to be clarified in future (experimental) research.”

    “Eavesdropping on Happiness” Revisited: A Pooled, Multi-Sample Replication of the Association between Life Satisfaction and Observed Daily Conversation Quantity and Quality [this is a pre-print of a paper that’s in press: the final published version may be different]

    Emma Young (@EmmaELYoung) is Staff Writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 22, 2018 09:02 AM.

  • False alarms may be a necessary part of earthquake early warnings

    To give enough time to take protective action, earthquake warning systems may have to issue alerts long before it’s clear how strong the quake will be.

    in Science News on March 21, 2018 08:20 PM.

  • Singular Spectrum Analysis - A very short introduction

    Singular Spectrum Analysis (SSA) can be used for smoothing time series and extracting the trend and oscillatory components. In this talk, Yi will introduce the basic idea of SSA and explain how it works by giving a simple example.

    Date: 23/03/2018
    Time: 16:00
    Location: LB252

    in UH Biocomputation group on March 21, 2018 05:10 PM.

  • Management researcher admits to falsification, resigns

    A business journal has retracted two papers after the corresponding author admitted he falsified his results. David DeGeest, an assistant professor in the Department of Management and Marketing, has also resigned from The Hong Kong Polytechnic University, a university spokesperson told Retraction Watch. Last month, DeGeest confessed to the Journal of Management (JOM) that he …

    in Retraction watch on March 21, 2018 03:00 PM.

  • Male birth control pill passes a safety test

    A prototype contraceptive for men safely reduced testosterone and other reproductive hormones during a month-long treatment.

    in Science News on March 21, 2018 01:00 PM.

  • Caught Our Notice: Former rising star loses fourth paper

    Title: Haemophilus influenzae responds to glucocorticoids used in asthma therapy by modulation of biofilm formation and antibiotic resistance What Caught Our Attention: This is the fourth retraction for Robert Ryan, formerly a high-profile researcher studying infections that can be deadly in people with lung diseases such as cystic fibrosis. In 2016, the University of Dundee …

    in Retraction watch on March 21, 2018 12:00 PM.

  • Meet the giants among viruses

    For decades, all viruses were thought to be small and simple. But the discovery of more and more giant viruses shows that’s not the case.

    in Science News on March 21, 2018 11:00 AM.

  • Why do we think of the future as being in front of us? New clues from study of people born blind

    By Alex Fradera

    Where is the future? The tendency in our culture – and most, but not all, others – is to compare the body’s movement through space with its passage through time: ahead are the things we are on our way to encounter. We intuit that the past is linked to the space behind and the future to that in front. But research in the Journal of Experimental Psychology: General has found that some Western people buck this tendency: those born blind.

    A team led by Luca Rinaldi of the University of Pavia recruited 17 normally sighted local participants and 17 participants of similar age who had early onset or congenital blindness and were unable to recollect any visual memories from their past. All participants were asked to sit at a desk, wear headphones and, if sighted, to wear a blindfold. Their task was to categorise the Italian words they heard over the headphones as quickly as possible as either a future or past word. The words were either adverbs like Prima (before) and Imminente (imminent) or verbs like Scrisse / Scriverà (he wrote / he will write).

    To categorise, participants had to move their hand forward to press a more distant key, or backward to press a key nearer their body. In one block of the experiment, participants were asked to use the forward response for future words and back for past ones, and in another the reverse mapping was required.

    Consistent with previous research, the sighted participants responded more quickly in the congruent block (forward = future) than with the incongruent one. But the blind participants showed no such effect, exhibiting similar response speeds across both experimental blocks. This suggests that visual sensory experience is involved in binding of our sense of time to the sagittal (forwards and backwards) plane.

    As Rinaldi and her colleagues said, “what has yet to come tends to be visually located in the space in front of us,” and this optical experience, repeated over and over, appears to be what seals in this association until it becomes automatic. A visceral experience, rather than the more abstracted linguistic associations, is key.

    Intriguingly, the use or not of a forward-backward spatial metaphor may have implications for the way that sighted and blind people experience time. For sighted people, past research has shown that events X months in the future feel subjectively closer than an event X months in the past – this makes sense in terms of a forward-backward spatial metaphor considering that the future is something that is always looming (visually) closer, while the past is always retreating.

    To see if the same is true for blind people, Pavia’s team asked their participants to think about real past events and anticipated future ones that were the same amount of time away (e.g. three days ago/three days ahead) and to say how close they felt psychologically. The sighted participants, but not the blind participants, felt that future events were psychologically closer than past events the same duration away.

    Blind people do show some time-space mappings: they, like sighted people, associate past events with the left-hand side of space rather than the right. This may reflect associations built up through reading, as both Western alphabetic text and braille share a left-to-right format. It seems that, whether sighted or blind, our minds use whatever reference points are available to make sense of the dimension that colours everything but can never quite be grasped – the experience by which, in the words of St Augustine, “what is present may proceed to become absent.”

    The ego-moving metaphor of time relies on visual experience: No representation of time along the sagittal space in the blind

    Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 21, 2018 08:58 AM.

  • International Day of Forests at BMC Series

    The 21st of March 2018 marks the 6th International Day of Forests. While New Zealand spends the day making sustainability sexy, and the UN announces the winner of its photo contest, the BMC Series blog is celebrating with a roundup of four fantastic BMC Ecology publications from 2017-2018. These publications investigate the mechanisms of mixed forests, the relationships between flora and fauna, how plant communities are shaped and species pool estimation methods. The result of all of this research is an increased understanding of our forest ecosystems, and the species which reside within them.

    Root adaptations in co-cultures of trees – How forests grow and adapt

    Camphor Tree

    When forests consist of a mixture of species, rather than just one (a monoculture), they are more likely to enhance ecological processes and sustainability. This is due to improved soil properties, environmental benefits and complementary resource use. The compromises and foraging strategies of different species determine the dynamics and structure of these plant communities.

    Research by Shu et al investigated the vertical distribution, biomass and form of roots in a mixed plantation of Chinese red pine and camphor tree, as well as in their respective monocultures, after 10, 24 and 45 years of growth. Their results indicated that the amount each species yields, when grown in a mixed culture, correlates with the amount of interaction and competition between roots below the ground.

    It was also found that the camphor tree invested more carbon into its root biomass in the mixed culture. However, the Chinese red pine adapted to the mixed culture by modifying its vertical distribution and root form flexibility. These different root foraging strategies could help to demonstrate the different forest growth strategies for co-occurring species, as well as contribute to the failure or success of a species over time.

    The relationship between Ryukyu flying-foxes and fig trees in East-Asian subtropical island forests

    Ryukyu flying-fox. Photographed by Koolah.

    Fig trees are valuable resources for many animals in a forest as they provide food and shelter for a wide variety of species. Flying-foxes are major consumers, and therefore seed dispersers, of many fig species within forests. Lee et al studied the Ryukyu flying-fox foraging dispersion and the relationships with tree species composition and fig abundance in the forests of Iriomote island in East Asia.

    The researchers found that Ryukyu flying-fox density on the island was positively dependent on the relative density of total figs, particularly the Hauli tree and the Common Red-stem Fig which are both dispersed by bats, including flying-foxes. The Ryukyu bats used these species as their predominant foods.

    It was also found that intermediate crop sizes of these figs were best suited to the solitary foraging which the Ryukyu flying-foxes undertake. The researchers suggest that if the density and average coverage of these predominant species decrease below a certain level, so too do their chances of attracting, and therefore being dispersed by, the bats.

    How ecological processes affect community structure in temperate forests in North Eastern China

    Forest in Jiaohe, Jilin Province. Photo by 西風.

    Research conducted by Fan et al. seeks to understand the relative importance of different ecological processes in shaping the community structure of a forest. Their study asks how taxonomic structure is affected by the scale of a forest community; how taxonomic structure may affect local processes, such as competition between species; whether the effect of local processes on taxonomic structure varies with forest community scale; and whether these analyses provide similar insights when compared to the use of phylogenetic information.

    The researchers found that the effect of abiotic factors is greater than the effect of interspecific competition in shaping the local community at nearly every scale. They also found that local processes did influence the taxonomic structures, but their combined effects varied with the scale of the forest community. The taxonomic approach provided similar insights to the phylogenetic approach; consequently, analyzing taxonomic structures may be a useful tool for communities where suitable phylogenetic data is not available.

    Estimating Species Pools for a Single Ecological Group

    Barro Colorado Island of central Panama. Photos courtesy of Christian Ziegler.

    A species pool is a set of species occurring in a particular region. Research by Shen et al. seeks to develop a statistical method for estimating species pools for a single local community. With only limited local abundance information, they developed a simple method to estimate the area and richness of a species pool for a local community.

    The research took place on Barro Colorado Island in central Panama. Their model predicted that the local species pool for the 0.5 km2 plot was almost the whole island. Tree species richness in this pool was estimated at approximately 360. Further statistical tests indicated that the true values of species richness and area size for the hypothetical species pool were covered by the estimates' 95% confidence intervals.

    The statistical method developed here may fill a gap in our knowledge of how to estimate species pools for a single local ecological assemblage with limited information.



    in BMC Series blog on March 21, 2018 08:34 AM.

  • Ketogenic diets and brain health (part 4)

    Is a diet where you can eat chocolate-dipped bacon AND lose weight too good to be true? Part 4 of our series on the gut-brain axis, diet and brain health. 

    Ketogenic (keto) diets have become increasingly popular for achieving rapid weight loss by eating a high fat, very low carbohydrate diet.

    On a “normal” diet, glucose is used for energy, so excess fats are used as a secondary fuel and often stored in adipose tissue. On a ketogenic diet, a low intake of carbohydrates induces the body into a state known as ketosis, in which ketones rather than glucose power cells, including neurons. By transitioning the body to burn fat instead of carbohydrates (glucose), weight loss can be achieved.

    Ketosis is initiated by the body to help us survive when food intake is low. Ketones (or ketone bodies) are made in the liver when the body uses fat for energy, and these can be used as an alternative energy source by cells.

    To check that ketones are being produced, urine or blood samples need to be checked daily. However, small shifts in diet through increased carbohydrate consumption can quickly shift the body out of ketosis.

    How does ketosis impact the brain?

    Ketosis increases production of gamma-aminobutyric acid (GABA), which is an inhibitory neurotransmitter in the brain. Many anti-anxiety drugs such as benzodiazepines have a calming effect by mimicking the effects of GABA in the brain. So the theory is that increased GABA via ketosis may reduce stress and anxiety symptoms.

    The ketogenic diet has been used since the 1920s as a well-established treatment option for children with epilepsy. Ketogenic diets reduce seizures in about a third of children who are unresponsive to standard antiepileptic drug therapies.

    Although the mechanism by which ketogenic diets can reduce seizures is not fully understood, the increase in GABA may be central to their effectiveness.

    Ketosis boosts and protects brain cells

    Research suggests that ketones such as beta-hydroxybutyrate may be a more efficient fuel for neurons than glucose. In particular, ketones increase numbers of mitochondria (cellular “power houses”) in neurons helping them to function more efficiently.

    Studies have suggested that the ketogenic diet may reduce neuroinflammation. Ketogenic diets increase levels of glutathione – a potent antioxidant. This may protect the brain against oxidative stress that can lead to neuroinflammation and DNA damage.

    Ketogenic diets and mental health

    Laboratory research on rats and mice suggests that ketogenic diets could potentially benefit a number of brain health conditions.

    The mood stabilising effects of ketogenic diets may be useful in the treatment of mania and bipolar disorder. In particular, ketogenic diets can normalize neuronal excitability, increasing the synchronisation of neuron signalling.

    High ketone diets fed to transgenic mice that develop pathologies seen in Alzheimer’s disease can normalize some brain changes, but no studies have been conducted with dementia patients yet. Ketogenic diets have been shown to enhance memory performance in adults with mild cognitive impairment, a major risk factor for the development of dementia including Alzheimer’s disease.

    However, a 2017 review published in Frontiers in Psychiatry exploring a role for the diet (KD) in treating anxiety, depression, bipolar disorder, schizophrenia, autism spectrum disorder, and ADHD in humans found “insufficient evidence for the use of KD in mental disorders.”

    Despite the diet's long, rich history in treating epilepsy, its use in mental disorders has received little attention, with few studies beyond case reports and small open studies, and no controlled trials.

    Therapeutic potential of ketogenic diets

    Thus, the current best advice is: the ketogenic diet SHOULD NOT be implemented without the guidance of a medical professional and careful monitoring.

    Animal studies suggest that the ketogenic diet may be useful to treat some mental health conditions, but the same studies also report animals maintained on ketogenic diets long-term show signs of chronic stress.

    Although ketogenic diets can alter aspects of neurochemistry that may benefit the brain, neuroscientists still need to pinpoint the mechanisms by which these occur, and robust clinical studies in people are currently lacking.

    Final thoughts

    Dietician Joe Leech provides a great overview of the keto diet (and its precursor, ‘paleo’) on his website. Leech points out that although the diet has health benefits (e.g. epilepsy treatment), the average person is unable to maintain it for longer than one or two months. He says,

    The ketogenic diet is a fad diet because it’s unsustainable and you can’t cheat. Even one of the earliest influencers no longer follows the diet because it’s too restrictive.


    in Yourbrainhealth on March 21, 2018 03:17 AM.

  • 5 things we’ve learned about Saturn since Cassini died

    The Cassini spacecraft plunged to its death into Saturn six months ago, but the discoveries keep coming.

    in Science News on March 20, 2018 07:30 PM.

  • How obesity makes it harder to taste

    Mice that gained excessive weight on a high-fat diet also lost a quarter of their taste buds.

    in Science News on March 20, 2018 06:00 PM.

  • Kids are starting to picture scientists as women

    An analysis of studies asking kids to draw a scientist finds that the number of females drawn has increased over the last 50 years.

    in Science News on March 20, 2018 03:27 PM.

  • UCSF-VA investigation finds misconduct in highly cited PNAS paper

    PNAS has corrected a highly cited paper after an investigation found evidence of misconduct. The investigation—conducted jointly by the University of California, San Francisco, and the San Francisco Veterans Administration Medical Center—uncovered image manipulation in Figure 2D, which “could only have occurred intentionally.” The institutions, however, could not definitively attribute the research misconduct to any …

    in Retraction watch on March 20, 2018 03:00 PM.

  • Struggle with emotional intensity? Try the “situation selection” strategy

    By Christian Jarrett

    If you are emotionally sensitive, there are mental defences you can use to help, like reappraising threats as challenges or distracting yourself from the pain. But if you find these mental gymnastics difficult, an alternative approach is to be more strategic about the situations that you find yourself in and the company you keep. Rather than grimacing as you endure yet another storm of emotional angst, make a greater effort to plan ahead and seek out the sunlit places that promise more joy.

    As the authors of a new paper in Cognition and Emotion put it: “Situation selection provides an alternative strategy for individuals that does not rely on in-the-moment cognitive resources, and allows reactive and/or less competent individuals to tune their environment in order to promote certain emotional outcomes.”

    Thomas Webb at the University of Sheffield and his colleagues first surveyed 301 volunteers (average age 36; 62 per cent were female) using a newly developed 6-item measure of situation selection. For instance, participants rated how much they “select activities that help me to feel good” and “steer clear of people who put me in a bad mood”. The participants also completed other questionnaires that gauged their happiness and emotional sensitivity, among other things.

    Although purely correlational, the findings supported the researchers’ predictions: overall, participants who scored higher on situation selection also tended to report lower levels of negative mood and depression. Moreover, specifically among emotionally sensitive participants who admitted that they found it difficult to regulate their emotions, situation selection was also associated with greater life satisfaction and happiness.

    Situation selection sounds obvious but how often can we say that we really think strategically in this way? A lot of the time our plans are based more on habit or passive acceptance of other people’s suggestions.

    To provide a preliminary test of whether encouraging greater situation selection might be a useful strategy, especially for more emotionally vulnerable people, the researchers conducted a second study over a weekend with 125 more volunteers. On a Friday, the participants completed several psychological questionnaires, including one tapping their emotional sensitivity. Half the participants were then given the following instruction, designed to foster greater situation selection, and asked to repeat it to themselves three times and fully commit to it:

    If I am deciding what to do this weekend, then I will select activities that will make me feel good and avoid doing things that will make me feel bad!

    Then, on the following Monday, all the participants provided a breakdown of their weekend activities and how they’d felt during each one. People generally engage in a fair amount of situation selection anyway, but the manipulation worked in that those who received the instruction subsequently scored higher on situation selection than the controls who did not receive the instruction. But the main take-away is that participants who received the situation selection instruction experienced more positive mood over the weekend, compared with the controls, and this was especially the case for the more emotionally sensitive participants.

    The researchers concluded: “Notwithstanding the limitations of the studies … the present research underlines the potential for using situation selection to successfully navigate emotional life and suggests several directions for future studies on this relatively under researched emotion regulation strategy.”

    Those future studies might include looking at whether the effectiveness of the situation selection approach is moderated by how good people are at judging how they will feel in different situations, which is what psychologists call “affective forecasting” – something we’re typically not very good at. For instance, many of us underestimate how good we’ll feel after doing some exercise.

    More awkward issues will also need to be dealt with, such as how to balance the aim of reducing people’s emotional discomfort in the moment against their longer-term goals, which might necessitate navigating emotionally challenging situations. Indeed, Cognitive Behavioural Therapy involves addressing safety behaviours (think of the socially anxious person who avoids public speaking) that a person uses to reduce their emotional discomfort, but which in the long-term can exacerbate their emotional vulnerabilities or hinder their ambitions.

    Sceptical readers may agree that situation selection sounds like an appealing approach, but wonder what to do about life getting in the way – first they’ve got that meeting with their grumpy boss, then a visit with their ill parent, then they must pick the kids up from school, then make dinner, then ….

    Situation selection is a particularly effective emotion regulation strategy for people who need help regulating their emotions

    Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on March 20, 2018 09:35 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How much is too much? Does increasing use of social media have a damaging effect on young girls?

    In December, British Health Secretary Jeremy Hunt warned Facebook to ‘stay away from my kids’ after it launched a messenger service aimed at under 13s, and in February,  the House of Commons Science and Technology Select Committee launched a new inquiry to examine the health risks to children and young teens of increasing amounts of time on social media.

    For tweens and early teens, the rise in time spent on Snapchat, WhatsApp and other social media is really quite dramatic. Our new study set out to look at patterns of behavior among 10-15-year-olds in the UK, and their levels of well-being, to see if all this time spent online was having a detrimental impact on their mental health.  We found that teenage girls are by far the highest users of social media, and those who are using it for more than an hour a day also have more well-being problems in later teen years.

    We used the youth participants’ data from the UK household longitudinal study, Understanding Society, following almost 10,000 young people from diverse backgrounds, across the whole country, over 5 years.

    We asked the young people to report on how much time they spent on social media on a ‘normal school day.’  A few reported no internet access or no time spent at all, but some were on it four hours or more.  Ten percent of ten-year-old girls reported spending one to three hours a day (compared with 7 percent of boys) and this increased to 43 percent of girls at age 15 (and 31 percent of boys).

    Two measures of well-being were assessed for these young people. The first was a combined score of questions that asked about satisfaction with schoolwork, friends, family, appearance, school and life as a whole. The second measure was based on a well-established questionnaire which asked the young people about their social and emotional difficulties.

    At age 10, girls who interacted on social media for an hour or more on a school day had worse levels of well-being compared to girls who had lower levels of social media interaction. Additionally, these girls with higher social media interaction at age 10 were more likely to experience more social and emotional difficulties as they got older.

    For both boys and girls, levels of happiness decreased between the ages of 10 and 15, but the decrease among girls was greater than that of boys.

    What makes girls different?

    Girls may participate in more comparisons of their own lives with those of the people they are friends with or follow. Viewing filtered or photo-shopped images and mostly positive posts may lead to feelings of inadequacy and poorer well-being. Girls may also feel more pressure to develop and maintain a social media presence than boys. Social media presence requires constant updating and having friends share or like their content. If their perceived popularity decreases over time, there may be a resultant increase in social and emotional difficulties.

    Boys, on the other hand, are much more likely to participate in gaming than in social media; gaming was not covered by our study. Thus boys’ levels of well-being may be more related to their gaming and skill level.

    What can we do?

    So what can be done to help protect young people from the potential damage to their mental health? Social media interaction does not appear to be a short-lived phenomenon. The recent Children’s Commissioner for England’s report, Life in Likes, suggests pressing social media platforms to check for underage use and better preparing children for life in a digital age. The recommendations did not discuss potential gender differences, but the findings from our study suggest that boys and girls can have different emotional and well-being responses to high levels of social media interaction.

    There have also been calls for the technology industry to look at built-in time limits. Our study really backs this up – increasing time online is strongly associated with a decline in well-being in young people, especially girls.  Young people need access to the internet for homework, for watching TV and to keep in touch with their friends, of course,  but do they really need to spend one, two, three or four hours chatting, sharing and comparing on social media every school day?

    The post How much is too much? Does increasing use of social media have a damaging effect on young girls? appeared first on BMC Series blog.

    in BMC Series blog on March 20, 2018 08:45 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Over a dozen editorial board members resigned when a journal refused to retract a paper. Today, it’s retracted.

    Following a massive editorial protest, Scientific Reports is admitting its handling of a disputed paper was “insufficient and inadequate,” and has agreed to retract it. The 2016 paper was initially corrected by the journal, after a researcher at Johns Hopkins University, Michael Beer, accused it of lifting some of his earlier work. After we covered … Continue reading Over a dozen editorial board members resigned when a journal refused to retract a paper. Today, it’s retracted.

    in Retraction watch on March 20, 2018 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    First pedestrian death from a self-driving car fuels safety debate

    A self-driving Uber kills woman in Arizona in the first fatal pedestrian strike by an autonomous car.

    in Science News on March 19, 2018 10:24 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Some TRAPPIST-1 planets may be water worlds

    Two of TRAPPIST-1’s planets are half water and ice, which could hamper the search for life.

    in Science News on March 19, 2018 09:53 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS Extends a Profound Thank You

    To all our Reviewers, Guest Editors and Editorial Board members, thank you! 2018 marks the fourth year that we formally and publicly acknowledge our community of reviewers and editors for sustaining public access to rigorous peer-reviewed research, enhancing our journals’ abilities to communicate the work of researchers and communities, and inspiring the work of our staff.

    A Passion to Go Beyond

    Our contributor community selflessly guides papers through the editorial process and provides feedback to authors. Many also actively stand up for Open Access, open data and Open Science as they attend conferences, debate with colleagues or share research via social media, email or even, yes, conversation! Our reviewers and editors devote time and energy to PLOS Collections, Special Issues, PLOS Channels, Editorials, Perspectives, blog posts, interviews, career advice, educational material and more to enrich the primary literature, provide context and engage the public. These endeavours help make PLOS a unique place to publish and for that we are greatly appreciative.

    Those who volunteer their services to PLOS, and the greater scientific community, are more than just dedicated scientists: they share an entrepreneurial spirit as we advance scholarly communication with requirements for ORCID iDs for corresponding authors, implementation of CRediT for author contributions, community engagement for improved evaluation, integration of preprint Editors and experimentation with software development. Throughout all of these innovations, some successful (108,000 ORCID entries in our system uniquely identify authors to ensure complete and accurate recognition of work, regardless of changes to name or institution) and others less so, PLOS continues to be, in the words of our CEO, an organization “willing to take risks in order to best serve scientific communities.” We have tremendous appreciation for Reviewers, Guest Editors and Editorial Board Members who are willing to travel the road with us toward a world of readily discoverable, freely available, thoroughly reliable and fully reusable research outcomes.

    An Impressive Community Workload

    Continuing our quest for transparency in the publishing process, we include in each of the seven journals’ thank-you articles the number of research articles newly submitted and published in 2017. We released this information for the first time last year and will continue to do so, as it provides additional insight into, and appreciation for, the workload of our reviewers and editors. That workload in 2017 supported publication of more than 23,000 research articles.

    Our global network of more than 74,000 reviewers and 7,200 editors ensures that Research Articles, Perspectives, Editorials and more achieve the highest quality possible. The more than 15 million article views per month (on average) this past year hints at the enthusiasm that PLOS reviewers and editors share, for science and scientists. Enthusiasm is not enough, however. This geographically diverse contributor community also shares a commitment to responsible and fair examination of the science, ethics, reporting guidelines, data availability and journal publication criteria associated with each submission.

    Efforts to Ease Process and Enrich Training

    We’ve listened to our reviewer and editor communities – especially Early Career Researchers – who want more training. In response we developed the PLOS Reviewer Center to provide detailed, journal-agnostic peer review guidance from experienced researchers, staff editors, Editorial Board members, and other reviewers. The Reviewer Center is still under development, so we encourage you to take a look around and let us know what you find useful or missing via the feedback form.

    Through the PLOS Reviewer Center anyone can:

    • Learn the basics of peer review and get helpful tips for handling reviewer tasks, from accepting a review invitation to completing a review
    • Access video, templates, checklists and other customized tools for reviewers
    • View recent articles and commentary about trends and studies on peer reviewers, the peer review process, and other related topics

    In addition to the Reviewer Center, we’ve made our guidelines for Reviewers and Editors more comprehensive, provided editors with more journal transfer options to accelerate publication of peer-reviewed manuscripts, made reviews public to increase the transparency of the review process, and experimented with signing reviews. A detailed overview of the Editorial and Peer Review Process is available on all journal sites.

    For the Record

    A published, citable thank-you article provides reviewers and editors with recognition and an academic citation for their inspired service to colleagues, institutions, funders and the public. Each reviewer’s and editor’s name is listed in the Supporting Information of each journal’s published article; links to these articles are below.

    We’re in the midst of expanding the size and scope of the PLOS ONE Editorial Board to achieve stronger subject area coverage across all relevant disciplines. If you’d like to learn more, please email us at edboardmgmt@plos.org.

    Once again, thank you!

    in The Official PLOS Blog on March 19, 2018 06:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Universal Linguistic Decoders are Everywhere

    Pereira et al. (2018)

    No, they're not. They're really not. They're “everywhere” to me, because I've been listening to Black Celebration. How did I go from “death is everywhere” to “universal linguistic decoders are everywhere”? I don't imagine this particular semantic leap has occurred to anyone before. Actually, the association travelled in the opposite direction, because the original title of this piece was Decoders Are Everywhere.1 {I was listening to the record weeks ago, the silly title of the post reminded me of this, and the semantic association was remote.}

    This is linguistic meaning in all its idiosyncratic glory, a space for infinite semantic vectors that are unexpected and novel. My rambling is also an excuse to not start out by saying, oh my god, what were you thinking with a title like, Toward a universal decoder of linguistic meaning from brain activation (Pereira et al., 2018). Does the word “toward” absolve you from what such a sage, all-knowing clustering algorithm would actually entail? And of course, “universal” implies applicability to every human language, not just English. How about, Toward a better clustering algorithm (using GloVe vectors) for inferring meaning from the distribution of voxels, as determined by an n=16 database of brain activation elicited by reading English sentences?

    But it's unfair (and inaccurate) to suggest that the linguistic decoder can decipher a meandering train of thought when given a specific neural activity pattern. Therefore, I do not want to take anything away from what Pereira et al. (2018) have achieved in this paper. They say:
    • “Our work goes substantially beyond prior work in three key ways. First, we develop a novel sampling procedure for selecting the training stimuli so as to cover the entire semantic space. This comprehensive sampling of possible meanings in training the decoder maximizes generalizability to potentially any new meaning.”
    • “Second, we show that although our decoder is trained on a limited set of individual word meanings, it can robustly decode meanings of sentences represented as a simple average of the meanings of the content words. ... To our knowledge, this is the first demonstration of generalization from single-word meanings to meanings of sentences.”
    • “Third, we test our decoder on two independent imaging datasets, in line with current emphasis in the field on robust and replicable science. The materials (constructed fully independently of each other and of the materials used in the training experiment) consist of sentences about a wide variety of topics—including abstract ones—that go well beyond those encountered in training.”

    Unfortunately, it would take me days to adequately pore over the methods, and even then my understanding would be only cursory. The heavy lifting would need to be done by experts in linguistics, unsupervised learning, and neural decoding models. But until then...
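    That said, the representational idea quoted in the second point above is simple enough to sketch. What follows is my own toy illustration, not the authors’ actual pipeline: it assumes only what the paper describes – word meanings live in a GloVe-style vector space, a sentence’s meaning is the average of its content words’ vectors, and the decoder is a linear (ridge-style) map from voxel patterns to semantic vectors. The tiny random vocabulary and fake activation data are purely for self-containment.

        import numpy as np
        from numpy.linalg import norm

        rng = np.random.default_rng(0)

        # Toy stand-in for pre-trained GloVe vectors (the real thing is 300-d and
        # covers a large vocabulary; a small random one keeps this self-contained).
        vocab = ["dog", "barks", "loudly", "cat", "sleeps", "the", "a"]
        dim = 50
        glove = {w: rng.standard_normal(dim) for w in vocab}
        stop_words = {"the", "a"}

        def sentence_vector(sentence):
            """A sentence's meaning vector: the average of its content words' vectors."""
            words = [w for w in sentence.lower().split() if w not in stop_words]
            return np.mean([glove[w] for w in words if w in glove], axis=0)

        # Train a linear decoder from voxel patterns to semantic vectors on
        # single-word trials (ridge regression, solved via the normal equations).
        n_voxels = 200
        X = rng.standard_normal((len(vocab), n_voxels))  # fake activation patterns
        Y = np.stack([glove[w] for w in vocab])          # their semantic targets
        alpha = 1.0
        W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

        # At test time: decode a new activation pattern into a semantic vector,
        # then rank candidate sentences by cosine similarity with it.
        decoded = X[0] @ W  # reusing a training pattern, purely for illustration
        for s in ["the dog barks loudly", "a cat sleeps"]:
            sv = sentence_vector(s)
            print(s, float(decoded @ sv / (norm(decoded) * norm(sv))))

    In the actual study the training stimuli were sampled to tile the semantic space and the decoder was evaluated on held-out sentences from independent datasets; the sketch only shows how “average of content-word vectors” plus a linear map can, in principle, carry a decoder from words to sentences.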

    Death is everywhere
    There are flies on the windscreen
     For a start
     Reminding us
     We could be torn apart

    ---Depeche Mode, Fly on the Windscreen


    1 Well, they are super popular right now.


    Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, Botvinick M, Fedorenko E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nat Commun. 9(1):963.

    Come here
    Kiss me
    Come here
    Kiss me


    in The Neurocritic on March 19, 2018 05:21 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Caught Our Notice: Retraction eight as errors in Wansink paper are “too voluminous” for a correction

    Title: Shifts in the Enjoyment of Healthy and Unhealthy Behaviors Affect Short- and Long-Term Postbariatric Weight Loss What Caught Our Attention: Cornell food marketing researcher Brian Wansink, the one-time media darling who has been dogged by mounting criticism of his findings, has lost another paper to retraction. As we’ve noted in the past, corrections for … Continue reading Caught Our Notice: Retraction eight as errors in Wansink paper are “too voluminous” for a correction

    in Retraction watch on March 19, 2018 05:05 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Probe finds misconduct in eight papers by researcher in Sweden

    An external probe has concluded that a researcher based at the University of Gothenburg committed misconduct in multiple papers, all of which should be withdrawn. Among 10 papers by Suchitra Sumitran-Holgersson at the University of Gothenburg, an Expert Group concluded that eight contained signs of scientific misconduct. The Expert Group, part of Sweden’s Central Ethical … Continue reading Probe finds misconduct in eight papers by researcher in Sweden

    in Retraction watch on March 19, 2018 03:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Elsevier signs 3-year ScienceDirect deal with University of Montreal

    The agreement will provide access to all of Elsevier’s current titles and complete archives

    in Elsevier Connect on March 19, 2018 03:37 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Tree rings tell tale of drought in Mongolia over the last 2,000 years

    Semifossilized trees preserved in Mongolia contain a 2,000-year climate record that could help predict future droughts.

    in Science News on March 19, 2018 02:26 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dying Young and the Psychology of Leaving a Legacy

    Often the biggest existential distress that we carry is the idea that no-one will remember us when we are gone—initially we know that our friends and family will hold who we are, but after a generation, these people are likely gone too. At the end of life, the pressure to leave an unquestionably relevant legacy can be crippling, particularly for young people. When coupled with the limited energy that people have when they are unwell, the very nature of what people expect to achieve in the world shrinks, and the really important pieces come into focus.

    When time is seen to be limited, every moment can take on a weight that has never before been experienced. Some of these expectations come from within and some externally, but regardless of their origin they can be paralyzing for the young person facing their mortality, particularly when unwell. Culturally, there are multiple references as to what ‘dying young’ is meant to mean, and most refer to extraordinary and often unobtainable expectations: membership of the ‘27 club’ (celebrities who die on or before their 27th birthday), cancer-related concepts such as ‘bucket lists’, and works of fiction (e.g., The Fault in Our Stars). Most young people, particularly those who are dying, do not have the capacity or the options to engage in an extraordinary feat, and they can become overwhelmed and paralyzed by what they are ‘meant to be doing’.

    I think I have well and truly missed my opportunity for greatness, I now just want enough energy to spend time with my friends. Maybe even go to the pub.

    ~18-year-old male

    Often, as with many things in life, it is the simple and small gestures and moments that are the most meaningful; huge projects and adventures feel too overwhelming, out of the grasp of someone with limited energy and resources. As such, the fantasy of what something might have looked and felt like, had they been well, is a much more satisfying space for them to sit with. Similarly, relationships become much more meaningful, as do the simple things that are taken away through the treatment process, like being able to sit in the sun or go to the pub with a friend.

    ‘I had been playing online games with him for years, and I thought that I would never meet him now. He made it happen though.’

    ~19-year-old male

    Young patients can be bombarded with well-intentioned suggestions about what they ‘need’ to do, including legacy-based activities such as leaving cards for each of their younger siblings’ birthdays, keeping video journals of their death, or chronicling how they feel about all the people in their world. Although these are good ideas, they are emotionally and physically difficult to manage with limited resources. Patients need to be feeling very resilient and well before attempting any of these things, and most are abandoned because of the confronting nature of conceptualizing the world without them present in it. It is a difficult ask for anyone to grasp the relatively abstract idea of the world continuing after their own death; this does not change for young people and, in some ways, it is even more challenging due to their pervasive sense of self, even in the face of very real threats to their mortality.

    ‘I could clean out my room, and all of my stuff. But then I think, well I don’t want to do it really, and it’s not like it’s going to be my problem.’

    ~23-year-old male

    The way that young people respond to being presented with a very limited life expectancy can vary tremendously. Some may stick their head firmly in the sand and refuse to discuss or conceptualize anything about what may happen in the lead-up to their death, or following it. Others will organize everything about the end of their lives, including where they want to die and how alert they want to be, as well as what will happen following their death—such as where their belongings go and how they want to be remembered. For most people in this situation, in an existential sense, almost everything is out of control: the disease will do what it does, the pain is what it is, and they are an observer to the things happening in their bodies. The things that people can control are what they talk about, how much they talk about it, and who they talk about it to.

    Just because death, dying, and legacy are not being talked about does not mean that they are not in the consciousness and thoughts of the person pondering their own end. Instead, it may be that they have done as much thinking and talking about it as they need to do; it is often these patients who have very well-considered plans about what they want to happen as they deteriorate, and about the decisions that must be made about their care.


    in Brain Blogger on March 19, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “Clear signs of manipulation” in paper co-authored by prominent geneticist

    A third paper co-authored by researchers based at a prominent lab whose work has been under investigation on and off for almost three years has been retracted. According to the notice, the university’s investigation found that a 2008 paper in FEBS Letters contained “clear signs of manipulation” in three figures. Research from geneticist David Latchman’s … Continue reading “Clear signs of manipulation” in paper co-authored by prominent geneticist

    in Retraction watch on March 19, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Psychologists have profiled the kind of person who is willing to confront anti-social behaviour

    By Alex Fradera

    “Lower your music, you’re upsetting other passengers.” Without social sanction, society frays at the edges. But what drives someone to intervene against bad behaviour? One cynical view is that it appeals to those who want to feel better about themselves through scolding others. But research putting this to the test in the British Journal of Social Psychology has found that interveners are rather different in character.

    A French-Austrian team led by Alexandrina Moisuc conducted a series of studies asking participants to read hypothetical scenarios involving anti-social behaviour, such as someone tearing up posters, spitting on the pavement, or throwing used batteries into a flower pot in a shared yard. The reliance on hypothetical scenarios and stated intentions is a limitation of the study, as these may not truly reflect real-world action; on the other hand, it allowed the researchers to investigate a broader range of situations.

    Participants were asked how they would respond, on scales that ranged from total inaction through sighing to addressing the transgressor mildly or aggressively. They also rated how morally outraged they felt about the transgression, with higher ratings correlating strongly with a desire to intervene. In addition, participants rated themselves on their personality and other traits.

    Moisuc’s team thought that one candidate personality profile of an intervener could be the “Bitter Complainer”: a person with low self-esteem who uses hostility towards others to feel better about themselves. There is some limited past evidence to support this view: for instance, experiments that make people feel more insecure lead them to judge others more harshly. Social sanctions are effectively a form of “altruistic punishment” (because they are for the wider social good), and some research on punishment in economic games shows that low-empathy individuals are more willing to punish others. On the other hand, the researchers anticipated that people with a personality more akin to the archetype of a strong leader might be more inclined to step in.

    Moisuc and her colleagues found that traits like low self-esteem and low levels of social capital – the “bitter” components – and also traits associated with lashing out, such as aggressiveness, poor emotional regulation, and social dominance orientation (seeing the world as hierarchical and so potentially wanting to put others down to keep yourself up), had no, or even a negative, relationship with preparedness to act. This was true in student samples in Austria and France, and in a further French non-student sample – around 1,100 participants in total.

    The personality factors that were associated with an intention to speak out included extraversion, confidence, persistence, being good at regulating emotions, valuing altruism and being comfortable expressing opinions. Those who already felt socially accepted, and happy to take on social responsibility – such as voting and paying taxes – were also more likely to say they would intervene. This is a very different picture from the Bitter Complainer. These traits are related to successfully managing difficult situations in teams, and to taking the risk to whistle-blow on organisations. Accordingly, Moisuc’s group characterises this as the “Well-adjusted Leader.”

    The data also indicated a connection between willingness to intervene and holding anti-prejudicial attitudes: lower “social dominance” was associated with speaking up, both in the more generic scenarios and in a subset that involved racist or sexist behaviour.

    It can be convenient to explain away other people’s pro-social behaviour as selfishly motivated, as that justifies our own inaction. These findings undermine that negative interpretation, suggesting that those who intervene are people well-adjusted enough to deal with difficult encounters, with a sense of responsibility toward their environment and the greater good. This is a call to examine ourselves: what do we need to set right in our own lives so that we can defend the world we truly want?

    Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest

    Individual differences in social control: Who ‘speaks up’ when witnessing uncivil, discriminatory, and immoral behaviours?

    in The British Psychological Society - Research Digest on March 19, 2018 09:24 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Will Smith narrates ‘One Strange Rock,’ but astronauts are the real stars

    Hosted by Will Smith, ‘One Strange Rock’ embraces Earth’s weirdness and explores the planet’s natural history.

    in Science News on March 18, 2018 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Selective Skepticism of Lynne McTaggart

    Lynne McTaggart is an author and leading alternative health proponent who was the foil for my first ever Neuroskeptic post, nearly 10 years ago. Ever since then I have occasionally been following McTaggart's output. McTaggart is a believer in things like a "Zero Point Field (ZPF), a sea of energy that reconciles mind with matter", an opponent of vaccines, and someone who thinks that spiritual and psychological change can cure advanced cancer. Since my first post, I haven't written mo

    in Discovery magazine - Neuroskeptic on March 17, 2018 06:23 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: No reproducibility crisis?; greatest corrections of all time; an archaeology fraud

    The week at Retraction Watch featured the retraction of a paper on homeopathy whose authors had been arrested; news about 30 retractions for an engineer in South Korea; and a story about how two stem cell researchers who left Harvard under a cloud are being recommended for roles at Italy’s NIH. Here’s what was happening … Continue reading Weekend reads: No reproducibility crisis?; greatest corrections of all time; an archaeology fraud

    in Retraction watch on March 17, 2018 01:29 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Inked mice hint at how tattoos persist in people

    Tattoos in mice may persist due to an immune response, challenging currently held beliefs about how the skin retains tattoos.

    in Science News on March 16, 2018 08:22 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Former NYU researcher falsified data in 3 papers, 7 grants: ORI

    A former researcher at New York University falsified and/or fabricated data in multiple papers and grant applications, according to the U.S. Office of Research Integrity. Bhagavathi Narayanan has already retracted three papers, the result of missing original data. Among the three papers flagged by the ORI, only one remains intact: A 2011 paper in Anticancer … Continue reading Former NYU researcher falsified data in 3 papers, 7 grants: ORI

    in Retraction watch on March 16, 2018 06:52 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    No two brain injuries are identical: The future of fMRI for assessment of traumatic brain injury By Ekaterina Dobryakova

    Brain imaging is an important tool for clinicians in diagnosing patients who have suffered from traumatic brain injury (TBI). Brain imaging techniques generally focus on either structure or function. With TBI, the focus is typically on the extent of structural brain damage, which is often assessed using computed tomography (CT).  Structural brain scans provide information regarding the severity of TBI, which is largely determined by the extent of damage. But, what about measures of brain function?

    Another brain imaging technique that has become a crucial instrument for scientists trying to learn more about how the brain works is functional magnetic resonance imaging (fMRI). fMRI allows the examination of human brain function in a way that is not invasive and, in contrast to a CT scan, does not involve radiation. With the help of math and statistics, brain mappers are able to measure brain activity patterns. But, can fMRI also be used as a diagnostic tool for TBI? Because (a) no two brain injuries are identical and (b) the way in which brain injuries affect cognition and brain function is highly variable, the current picture of fMRI use as a diagnostic tool for TBI is unclear.

    Nevertheless, new tools and techniques have recently been developed that allow for the assessment of brain function in TBI, as well as other types of brain injury. Using fMRI could thereby add a whole new dimension to our understanding of TBI and TBI recovery. To get a better sense of the present state of fMRI applications in TBI, we asked three TBI experts the following question:

    Given that the utility of fMRI is still relatively undefined in the clinical realm, how do you see modern neuroimaging techniques playing a role in TBI in the future, beyond conventional scanning (CT, structural MRI)?


    Erin Bigler

    Professor of Psychology and Neuroscience
    Department of Psychology & Founding Director

    Magnetic Resonance Imaging (MRI) Research Facility at Brigham Young University:

    “I believe that there is tremendous potential for clinical applications of fMRI in TBI, but not as standalone, independent metrics of brain function. Given the uniqueness of each individual and the heterogeneity of TBI, no two brain injuries are ever identical. As such, it is unlikely that there would ever be a universal fMRI signal that would consistently differentiate or be influenced by brain injury. However, fMRI activation paradigms in response to a cognitive task can probe neural system integrity, which can also be assessed through resting state functional connectivity mapping. Integrating this information with a dynamic structural imaging approach that takes advantage of multiple methods to assess volume, thickness, and shape along with lesion analysis would provide both structural and functional information about the brain injury. Additionally, these anatomical or functionally defined networks could then be the basis for using diffusion tensor imaging (DTI) to explore tract integrity between regions of interest within the network.”


    Frank Hillary

    Associate Professor of Psychology

    Department of Psychology, Penn State University:

    “FMRI has provided previously unavailable opportunities to advance our understanding of the organization of human brain functioning at the systems level. There has been meaningful extension of work in the cognitive neurosciences to understand plasticity in brain disorders with several important themes emerging from fMRI work in TBI (e.g., neural recruitment, “compensation”). By contrast, after nearly two decades of fMRI research designed to establish diagnostic biomarkers in various forms of mild TBI, no reliable neural signature for injury has emerged. I anticipate that future work in this area will be met with continued failure due to individual differences and the lack of clinical specificity in task-based fMRI findings.  More recent integration of network neuroscience into fMRI research has transformed the landscape of TBI research with goals now directed toward understanding injury-induced plasticity in large-scale networks (e.g., default mode network). While in its infancy, research that couples network science with brain stimulation techniques (e.g., transcranial magnetic stimulation) represents at least one possible avenue for functional MRI to contribute meaningfully to clinical intervention in TBI. To date, however, fMRI has primarily contributed to basic science and its potential to advance clinical diagnostics and intervention in TBI remains largely unrealized.”


    Brenna McDonald

    Associate Professor of Radiology and Imaging Sciences,

    Center for Neuroimaging at Indiana University School of Medicine:

    “FMRI and advanced structural MR techniques such as DTI have come into increasing use to study behavioral and cognitive changes after TBI. In addition to being used to explore the underlying structural and functional neural correlates of TBI, such techniques have the potential to help guide treatment approaches. For example, fMRI has been used to study the neural correlates of symptom reduction following pharmacological and behavioral treatments for cognitive symptoms after TBI. The understanding gained can be used to further refine treatments. For example, in mild TBI/concussion, use of imaging can help validate proposed screening and diagnostic tools, such as sideline assessments, cognitive or behavioral instruments, biomechanical force metrics, sensory or vestibular testing, or blood biomarkers. Demonstration of a correlation between such measures and imaging variables, such as task-related fMRI activation, resting cerebral blood flow, or structural or functional connectivity, in concert with complementary data such as cognitive assessment and postconcussive symptom measurement, could provide the evidence needed to demonstrate utility of such screening or diagnostic tools. More thorough knowledge of these interactions will help advance the goals of personalized medicine initiatives by improving prediction of injury risk, outcome, and treatment response.”


    Alongside structural brain imaging techniques that are routinely used in TBI, fMRI may provide a better understanding of brain function in TBI patients. By combining structural, functional, and cognitive assessments, clinicians may evaluate individual cases in a more specific way and adapt their treatments accordingly. After all, TBI is a very heterogeneous condition and impacts brain recovery differently in different people. This heterogeneity may become more manageable once we reach a better understanding of brain function. At the very least, fMRI offers new hope that treatments could be tailored more towards the individual. If and how fMRI fits into this goal of “personalized medicine” will become clearer in the years to come.

    Acknowledgments: The author thanks Erin Bigler, Frank Hillary, and Brenna McDonald for providing their valuable opinion and Veronica Schneider for editing the feature image.

    Any views expressed are those of the author, and do not necessarily reflect those of PLOS. 

    Ekaterina Dobryakova is Research Scientist in the TBI Department at the Kessler Foundation, and Research Assistant Professor in the Department of Physical Medicine and Rehabilitation at Rutgers New Jersey Medical School. She is also a member of the Communications Committee for the Organization for Human Brain Mapping.

    in PLOS Neuroscience Community on March 16, 2018 06:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Delays, arguing over upcoming Cell retraction leave first author “devastated”

    After being “blindsided” a few months ago when she was told one of her 2005 papers was going to be retracted, a researcher scrambled to get information about why. And when she didn’t like the answers, she took to PubPeer. Eight days ago, Shalon (Babbitt) Ledbetter, the first author of the 2005 paper published in … Continue reading Delays, arguing over upcoming Cell retraction leave first author “devastated”

    in Retraction watch on March 16, 2018 03:24 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What we can and can’t say about Arctic warming and U.S. winters

    Evidence of a connection is growing stronger, but scientists still struggle to explain why.

    in Science News on March 16, 2018 01:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Caught Our Notice: Voinnet co-author issues another correction

    Title: AtsPLA2-α nuclear relocalization by the Arabidopsis transcription factor AtMYB30 leads to repression of the plant defense response What Caught Our Attention:  A previous collaborator with high-profile plant biologist Olivier Voinnet (who now has eight retractions) has issued an interesting correction to a 2010 PNAS paper. Susana Rivas is last author on the paper, the … Continue reading Caught Our Notice: Voinnet co-author issues another correction

    in Retraction watch on March 16, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Carpe Diem—Living with Fear

    “Live life to the fullest.”
    “Celebrate life.”
    “Carpe diem.”

    I’ve heard them all. But what if I don’t feel like it? What if I’m having a lousy brain day, restricted to a darkened room with a blinding headache, and seizing the day is not an option?

    I have clusters of malformed blood vessels called cavernous angiomas in my brain. Two of them bled, turning my life upside down with seizures and other symptoms. A few months later, I underwent resection surgeries to prevent future bleeds.

    The surgeries wreaked additional havoc—headaches, seizures, fatigue, short attention span and memory loss, vertigo and poor balance, as well as severe depression. During the first couple of months post-surgery, my world revolved around my recovery. I was in survival mode, often fearful, often feeling alone. On good days, I took it one day at a time. On bad days (and there were many), I slid back three steps for every half step forward. There wasn’t much I could seize on those days.

    A year into my recovery, I finally had the wherewithal to join the Angioma Alliance, an online support group for angioma patients. Through the website, members connect with each other, sharing war stories, sometimes asking questions but more often seeking reminders that we are not alone in our struggles.

    All of us cavernous angioma patients live with an ax hanging over (or inside) our heads. There’s always a chance of a bleed, especially from an angioma that has bled before. Angiomas can cause symptoms even when they haven’t bled. A resected (surgically removed) angioma can grow back. Many of us who have the familial form of the disease have many angiomas and can generate new ones throughout our entire lives.

    Those of us who are good candidates for brain surgery, where the benefits outweigh the risks of surgery, are considered the lucky ones. One of the members of the Alliance has an angioma located in her brain stem. Unfortunately, it is inoperable. My friend is scared of the very real possibility of a bleed causing her heart to stop beating or to suddenly take away her ability to breathe. Her fears often paralyze her, preventing her from taking life by the horns.

    My fears emerge when a new symptom appears or a new manifestation of an old one emerges: is it a sign of a new bleed? Is a new angioma forming?

    These days, more than ten years since the surgeries, my good days outnumber the bad. Most of the time, my fears hide beneath the surface, and when they do come out of hiding, they rarely paralyze me.

    I should be able to seize the day.

    I have several friends who are breast cancer survivors. Sheryl, at the age of seventy, learned to fly-fish and dragon boat. She paddles competitively and participates in national and international dragon boat races.

    Darlene didn’t even jog before her diagnosis; now she runs marathons. She rarely traveled out of town, and now she travels frequently and extensively. She’s tried sky-diving, attends glitzy shows, and throws frequent pool parties.

    Are these inspiring activities the only ways that count as living life to the fullest? Should I seize and celebrate life like my breast cancer survivor friends?

    I have absolutely no interest in sky-diving or learning to fish. Glitzy shows have never been my thing, and I do my best to avoid parties.

    Is it a matter of personality? Perhaps if I were as gregarious as my friends, I would live more like them. They may not have been as daring pre-cancer, but were they as gregarious as they are now? Perhaps they only developed that side of their personalities after the challenges of treatment and recovery. Was I supposed to have become more outgoing?

    Having had to take a crash course in asking for help and admitting my weaknesses, I have become better at connecting with people. I’m not as extroverted as Sheryl and Darlene, but I am more outgoing than I was pre-surgery.

    Still, I’m not a party-goer. My difficulties processing high volumes of sensory input keep me from activities such as sporting events and parties that involve large crowds, loud noises, and garish colors.

    Perhaps it’s a matter of energy or lack thereof. Much of the time, I struggle through debilitating fatigue and have nothing left for celebrations. When I am overtired, my deficits are exacerbated and vertigo returns in full force, my balance is precarious, my attention span is that of a gnat, I have trouble accessing vocabulary, and my headaches are crippling.

    I have to pace myself. I take one day at a time, shuffling through the bad brain days, enjoying the good days. Is that the best I can hope for? Is that seizing the day?

    Like my cancer-surviving friends’ lives, mine has changed dramatically. I travel much more than in my pre-injury days, to Colorado and New York, Israel, and Mexico. Always, wherever I go, I must seek out quiet spots to recover and regroup. But once my inner traffic jams clear up, I join in the fun, though at a slower pace.

    I do have more passion in my life—it comes to light in my teaching, in my writing, and in my need to make a difference in the world.

    Within a few months of my surgeries, I moved into a more central neighborhood. I am within walking distance of shops and restaurants. I no longer drive everywhere. My awareness, both of myself and the world around me, has grown; I am more in tune with my fellow human beings, better able to interact with my surroundings. I live more quietly. I take leisurely walks, stopping to absorb my surroundings. I play with my grand-dog, enjoying his antics. Life is harder but more fulfilling.

    Could my way also count as a celebration of life?

    It is a lovely day outside. I am well rested after a rare night of decent sleep. I slip on my jacket and head out for a stroll along the nearby river.

    This diem is definitely calling out to be carped, my way.

    in Brain Blogger on March 16, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Astronomers can’t figure out why some black holes got so big so fast

    Early supermassive black holes are challenging astronomers’ ideas about how the behemoths grew so quickly.

    in Science News on March 16, 2018 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Psychotherapy trainees’ experiences of their own mandatory personal therapy raise “serious ethical considerations”

    By Christian Jarrett

    Many training programmes for psychotherapists and counsellors include a mandatory personal therapy component – as well as learning about psychotherapeutic theories and techniques, and practising being a therapist, the trainee must also spend time in therapy themselves, in the role of a client. Indeed, the British Psychological Society’s own Division of Counselling Psychology stipulates that Counselling Psychology trainees must undertake 40 hours of personal therapy as part of obtaining their qualification.

    What is it like for trainees to complete their own mandatory therapy? A new meta-synthesis in Counselling and Psychotherapy Research is the first to combine all previously published qualitative findings addressing this question. The trainees’ accounts suggest that the practice offers many benefits, but that it also has “hindering effects” that raise “serious ethical considerations”.

    David Murphy and his colleagues at the University of Nottingham conducted a systematic review of the literature and found 16 relevant qualitative studies up to 2016, involving 139 psychologists, counsellors and psychotherapists in training who had undertaken compulsory personal psychotherapy as part of their course requirements. Most of the studies involved interviews with the trainees about their experiences; the others were based on trainees’ written accounts.

    Murphy and his team identified six themes in the trainees’ descriptions. Some were positive. The trainees talked about how therapy had helped their personal and professional development, for example raising their self-awareness, emotional resilience and confidence in their skills. Personal psychotherapy also offered them a powerful form of experiential learning in which they got to see for themselves how concepts like transference play out in therapy, and they obviously experienced what it is like to be a client. They also learned about “reflexivity” – how to reflect on themselves and the way their own “self material” contributes to the dynamics of therapy.

    Another positive theme was therapeutic gains – some trainees saw their personal therapy as a form of “explicit stress management”; they said it helped them work through issues from their past; and also helped them to become their authentic selves, and accept their strengths and weaknesses.

    But the remaining themes were more concerning. The first – “Do no harm” – referred to the fact that many trainees spoke of the stress and anguish that the therapy caused them, and the way it affected their personal relationships. In some cases this left them feeling unable to cope with their client work (in which they were the therapist). Another theme – “Justice” – summarises the burden that trainees felt the mandatory therapy imposed on them, in terms of time and expense, the pressure of being assessed, and their lost autonomy.

    Finally, under the theme “Integrity”, the researchers said some trainees talked about how their therapist was unprofessional, yet it was difficult to switch to another; they felt coerced into therapy, and the mandatory nature of it prevented them from truly opening up – in fact there was a sense of some trainees simply jumping through hoops in a functional way to complete their course requirement.

    Murphy and his team end their paper calling on regulatory and training institutions to consider the issues raised by their findings. Although the “hindering factors” they identified raise serious ethical issues, they believe that it may be possible to address them: “We envisage that programmes that attend to the points raised in this study will provide the best learning opportunities, compared with courses that do not regularly critically reflect upon, assess, and evaluate mandatory psychotherapy within the course.”

    A systematic review and meta-synthesis of qualitative research into mandatory personal psychotherapy during training

    Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on March 16, 2018 09:03 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    AI bests humans at mapping the moon

    AI does a more thorough job of counting craters than humans.

    in Science News on March 15, 2018 07:53 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    AI Psychosis

    The fragility of our minds and theirs

    Well, with a title like that, what other picture could I possibly use? Credit: Pixabay

    We have fragile minds. Disorders of thought affect a large proportion of the population of rich countries at any one time; each person in these countries has an uncomfortably high probability of developing at least one of these disorders of thought in their lifetime, with onset peaking in early adulthood.

    These disorders come in many flavours, with many labels. Depression is common, as is anxiety. Addictions and compulsions too. More extreme are the darkness of obsessive compulsive disorders and the fragmentation of schizophrenia. They are uniquely human.

    Whether they will remain uniquely human is unknown. Research in artificial intelligence is making spectacular progress; for many researchers, this progress is along the path to developing human-like general AI. This leads to a troubling thought: will a human-like AI inherit human-like disorders of thought?

    For disorders of movement that originate in the brain, we have some understanding of what happens: specific neurons get damaged, and specific movement problems result. In Parkinson’s disease, losing a small collection of dopamine neurons in the midbrain seems yoked to when tremors or difficulty in movement appear. In Huntington’s disease, loss of neurons in the striatum is directly linked to the disease’s tell-tale involuntary movements and spasms.

    Disorders of thought have yet to yield to simple mechanistic explanations. Rather, they seem to be disorders of very large networks of neurons. Some might object that most disorders of thought have a mechanistic explanation, as they are disorders of neuromodulators – because most of their treatments change the amount of one or more neuromodulators within the brain. But all this tells us is that they have to be a network-wide problem, for neuromodulators are released all over the brain. And, as their name implies, neuromodulators change, but do not cause, the transmission of information between neurons. Even if changes in neuromodulation are at the root of thought disorders, their effect is played out in how they change the way neurons talk to each other.

    It is unclear why disorders of thought even exist. Are they inevitable in any sufficiently complex brain? If so, is this inevitability restricted to biology?

    How we answer that question depends on whether disorders of thought are due to

    (1) inherent flaws in biology,

    (2) the effects of culture, or

    (3) the inherent side-effects of large-scale complex networks of neurons.

    These are not mutually exclusive, as we’ll see.

    Inherent flaws in biology is the medical explanation. Brains are bags of cells, each of which is a bag of chemicals, and sits in a soup of other chemicals. A flaw in those chemicals, or the bags in which they sit, is a natural first target for how thought disorders arise. But innately flawed biology alone is unlikely to be the explanation. Such flaws would be selected against by evolution. Fish that can’t face another day of swimming won’t survive. Genetically inherited forms of thought disorders are rare. Instead, small differences in many genes can increase or decrease the probability of experiencing a thought disorder in a lifetime. Something then tips the balance from a probability to a reality.

    Thought disorders could thus be an inherent consequence of our cultures. Such a consequence could be the physical products we make: a pesticide, a solvent, a drug. The presence of particular manufactured chemicals in our environments has been linked to the increased probability of a few brain disorders; but thought disorders have apparently ancient origins, predating much of our industrial effluence.

    A more likely consequence of our culture is its sheer complexity. A well-rehearsed argument is that we are using our brains outside the niche in which they evolved. Our brains are subject to constant stressors in a society vastly bigger than the one they evolved within: hundreds of things to do, hundreds of things we are aware of but have no individual control over – wars, famine, disease, climate change. Chronic stress affects the wiring of neurons, and changes how they respond to inputs. In this scenario, the genetic differences that increase or decrease the chances of a thought disorder are played out in the resilience they confer on our neuronal networks to these culturally driven changes.

    But a common factor to all disorders of thought is that they arise in the most complex network of neurons on Earth. We have 17 billion neurons in our cortex; no other animal comes close. We like to give our cortex credit for our apparently unique combination of talents, for language and writing and maths; for creativity and cooking. Indeed theories for how our cortex got to this outlier size see it as either cause or consequence of culture: either that such large, adaptable networks allow us to form large social groups, and develop language; or that forming large social groups and developing language drove the evolution of the largest cortex on Earth.

    We have a small part of cortex dedicated to the understanding and production of spoken language. Another part to written language. Losing the ability to speak sentences does not mean losing the ability to write the same sentences. Written language is an exceptionally recent invention, too recent for a dedicated brain region to have evolved to process it; that we nonetheless have a brain region that deals with writing shows how our cortex is an adaptable, versatile machine.

    And therein lies the problem. Cortex is endlessly rewiring as new skills are learnt; as new memories are formed; as new knowledge is acquired; and as it continues to develop into early adulthood. In all that rewiring, there are chances of associations being formed between things that can’t exist, giving hallucinations; or associations that could exist, but are exceptionally unlikely, giving obsessive thoughts; or associations of bad outcomes with innocuous things, giving depression or anxiety. There is the chance that rewiring will lead to the activation of one set of neurons accidentally activating another set that is not relevant right now.
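
    As a toy illustration of that risk, here is a minimal Hebbian sketch in Python (my own illustration, not anything from the essay): units that happen to fire together strengthen their connection, so chance co-activation alone is enough to build associations.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        W = np.zeros((n, n))                         # connection strengths, initially absent
        for _ in range(1000):
            x = (rng.random(n) < 0.1).astype(float)  # sparse, random activity
            W += 0.01 * np.outer(x, x)               # Hebbian rule: co-active units wire together
        np.fill_diagonal(W, 0.0)
        # the strongest weights now link units that were only ever co-active by chance
        print(W.max())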

    Adaptability is not just rewiring. Cortex predicts what is about to happen. It predicts the next word in a sentence. It predicts when a sound follows a bright flash in the sky. And these predictions can grow too strong. They can write over the information from our senses. They can create events that are not happening – hallucinations; they can predict events that will never happen, depressing us.

    We seem to be the only species with a widespread and many-faceted array of things that can go wrong with our minds. We are also the only species with a 17 billion neuron cortex, containing trillions of connections. The two are plausibly linked.

    Contrast this with the current state of AI. We have witnessed remarkable progress. But at root we are still at the stage where one AI agent learns just one thing. A network learns to translate one language into another; or to classify a specific set of images; or to play Go or chess or draughts. While we have now reached the point where the same architecture, the same set of algorithms, may be used to solve different problems, the individual AI agent is still only learning one thing.

    Humans learn chess and Go and draughts; and learn multiple languages; and learn to paint. And learn sequences of events, predicting outcomes, good or bad. And do all this with one cortex. Which deals with many different types of learnt information, and many different uses of that information. In one densely complex network, ripe for malfunction.

    This line of argument suggests a “general” AI is not possible, or at best inadvisable. It suggests that a sufficiently complex network able to exhibit human-like abilities – to adapt to each new task, to make predictions, to learn, and to form memories – would also exhibit human-like frailties. That such AI would exhibit a range of disorders of thought, would have psychoses.

    The retort would be that, as we construct such AI ourselves, then we can engineer the networks we build to not fall prey to these frailties. That retort assumes we will have sufficient understanding of how these disorders arise in order to engineer around them. Patently, we do not have that at the moment, nor any indication that understanding is coming soon.

    A more refined retort may be that we need not follow the evolved brain slavishly, that we can find ways to have a complex network that can learn and do many different tasks without inheriting the design flaws of biology. In particular, that human disorders of thought seem dependent on neuromodulators, and AIs do not have them. One problem with this view is that it assumes neuromodulators are not doing computation. But it is all computation. A neuromodulator like serotonin changes the state of the brain by changing the strengths of connections between neurons. Neuromodulators have this role in both the tiniest nervous systems on Earth and in our cortex. We ought to assume they are necessary for being intelligent. It seems likely that AI will need something that mimics neuromodulation if it is to reach for general intelligence. For it is how real networks of neurons can be adaptively sculpted to the problem at hand, by changing how they interact, both briefly and permanently.
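
    To make that concrete, here is a toy sketch (again illustrative, with invented numbers): a single modulator value scales every connection in a fixed network at once, so the same wiring behaves differently at different modulator levels.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 4))   # fixed synaptic weights: no rewiring below
        x = rng.normal(size=4)        # presynaptic activity

        def step(x, W, modulator=1.0):
            # the modulator scales every connection at once, changing how
            # the neurons talk to each other without changing the wiring
            return np.tanh(modulator * (W @ x))

        print(step(x, W, modulator=0.2))  # low-modulator state
        print(step(x, W, modulator=2.0))  # high-modulator state: same wiring, different behaviour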

    Another problem with this view is that many AI systems already use neuromodulation. Dopamine neurons change their firing to signal the difference between the outcome you expected and the outcome you got. This is the “prediction error” at the heart of many of the most spectacular recent AI demonstrations. It can drive wrong associations between actions and outcomes in AI networks just as easily as in neuronal networks.
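
    In its simplest form, this prediction error is the temporal-difference error of reinforcement learning. A minimal sketch, with invented states and rewards, of how a coincidental reward can drive a wrong association:

        def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
            """One TD(0) update of a table of state values V (a dict)."""
            delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # the prediction error
            V[s] = V.get(s, 0.0) + alpha * delta
            return delta

        # a coincidental reward builds a wrong association just as readily as a real one:
        V = {}
        for _ in range(100):
            td_update(V, "ritual", r=1.0, s_next="end")
        print(V["ritual"])  # the "ritual" state has acquired a high predicted value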

    But say we did understand how these disorders come about. If they arise from anything other than purely inherent flaws in biology – if they arise from our culture, or are inherent side-effects of large, densely connected networks, or both – then they cannot be engineered around. Such advanced AI would exist within our culture – one that is disengaged from it would not, by definition, be the mooted general intelligence. Such advanced AI would undoubtedly need large, complex networks in which to learn and store many overlapping and different functions. Put this way, such advanced AI would seem just as vulnerable to thought disorders as us.

    This essay is not an answer, but a question. Because I want to know: will a network sufficiently complex to exhibit human-like intelligence also inevitably exhibit human-like disorders of thought?

    There are answers to the questions raised here. For example, if cell death and malfunction underlie every thought disorder, and those are always due to environmental stressors, then AI will be immune. In finding the answers to these questions, we will inevitably better understand the brain, and perhaps understand how to build a resilient, general purpose AI. A non-psychotic one. I think we’ve all seen enough sci-fi to agree: that would be a good thing.

    Read more on neuroscience at The Spike


    AI Psychosis was originally published in The Spike on Medium, where people are continuing the conversation by highlighting and responding to this story.

    in The Spike on March 15, 2018 07:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Ancient climate shifts may have sparked human ingenuity and networking

    Stone tools signal rise of social networking by 320,000 years ago in East Africa, researchers argue.

    in Science News on March 15, 2018 06:48 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    STEVE the aurora makes its debut in mauve

    A newly discovered type of aurora is a visible version of usually invisible charged particles drifting in the upper atmosphere.

    in Science News on March 15, 2018 05:15 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why one journal will no longer accept author-suggested reviewers

    In a recent editorial, the Journal of Neurochemistry declared it would no longer accept author-suggested reviewers. While other journals have done the same in order to prevent fake reviews, the Journal of Neurochemistry is basing its decision on a different logic. We spoke with editor Jörg Schulz about why he believes relying on reviewers picked by editors helps … Continue reading Why one journal will no longer accept author-suggested reviewers

    in Retraction watch on March 15, 2018 03:38 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “The ‘1’ key was not pressed hard enough:” Did a typo kill a cancer paper?

    Errors in a 2017 paper about a new cancer test may have occurred because of a simple typo while performing calculations of the tool’s effectiveness. According to the last author, the “1” key was likely not pressed hard enough. The error, however small, affected key values “so greatly that the conclusions of the paper can … Continue reading “The ‘1’ key was not pressed hard enough:” Did a typo kill a cancer paper?

    in Retraction watch on March 15, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Liverwort reproductive organ inspires pipette design

    A new pipette is inspired by a plant’s female reproductive structure.

    in Science News on March 15, 2018 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    BMC at the 2018 Global Health Trials Conference, Nigeria

    The 2018 Global Health Trials Conference attracted over 350 registered participants and over 25 national and international speakers and facilitators. This year’s theme was ‘Collaborations, Networks and Partnerships for Health Research Conduct in Nigeria’. Topics discussed cut across multiple global health issues, ranging from infectious, communicable and non-communicable diseases to education in global health and scientific communication.

    The keynote speaker, Prof Akin Abayomi, emphasized the need to harness local resources to fund research that prioritizes the needs of countries in Africa. He highlighted how the slave trade, colonialism, neocolonialism and corruption had robbed the continent of its potential for growth and development, and how effort needed to be invested in preventing Africa from being left behind in the era of knowledge and technology development.

    Themes that emerged from the two-day meeting included motivating participants to disrupt current norms of practice in order to make advances in global health research. This includes questioning convention, choosing to do things differently and thinking outside the box. Such approaches make it possible for researchers in low- and middle-income settings to maximize the outcomes of collaborations with Northern-based researchers, improve the potential value of South-South collaborations, and push the frontiers of medical and global health education.

    At the meeting, I shared with participants the publishing opportunities the BMC Series offers to research scholars from low- and middle-income countries. These include efforts to promote interesting stories. For instance, a study in BMC Public Health – “Socially isolated individuals are more prone to have newly diagnosed and prevalent type 2 diabetes mellitus – the Maastricht study”, published in December 2017 – has already been featured by 130 news outlets worldwide.

    I shared my experience of handling manuscripts as a section editor for BMC Oral Health. I took participants at the workshop through the BMC peer review process, reiterating that BMC focuses on publishing good-quality research, not just innovative research. I highlighted that an accepted research manuscript must ask a scientifically valid research question that fills a gap in existing knowledge and is informed by previous research or clinical observations. The research must also use suitable data collection and data analysis methods.

    Prof William Brown, Professor Emeritus at the University of Colorado School of Medicine, USA, complemented my talk by educating participants on critical considerations when choosing a manuscript’s title and writing its introduction, methods, results, discussion, conclusion and abstract. His talk focused on helping non-native English writers avoid various pitfalls when writing a scientific paper.

    It was nice to find out that many participants at the meeting were familiar with BMC. Their greatest concern was the high article-processing fee for researchers in low-income countries with little or no access to research grants. I highlighted that BMC is sensitive to this need and provides researchers with fee-paying options, including a complete fee waiver when a justification for such a waiver is made. Inability to pay article-processing fees is not a barrier to the publication of good-quality manuscripts at BMC.

    The post BMC at the 2018 Global Health Trials Conference, Nigeria appeared first on BMC Series blog.

    in BMC Series blog on March 15, 2018 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What do the public make of the “implicit association test” and being told they are subconsciously racist?

    By Christian Jarrett

    Many millions of people around the world have taken the “implicit association test (IAT)” hosted by Harvard University. By measuring the speed of your keyboard responses to different word categories (using keys previously paired with a particular social group), it purports to show how much subconscious or “implicit” prejudice you have towards various groups, such as different ethnicities. You might think that you are a morally good, fair-minded person free from racism, but the chances are your IAT results will reveal that you apparently have racial prejudices that are outside of your awareness.
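
    To give a flavour of how response speeds become a bias score, here is a minimal sketch of the idea behind IAT scoring, loosely based on the D-score of Greenwald and colleagues; the real algorithm adds trial filtering and error penalties omitted here, and these latencies are hypothetical.

        import statistics

        def d_score(compatible_ms, incompatible_ms):
            """Difference in mean latency, scaled by the pooled standard deviation.
            Positive values: slower when the pairing runs against the measured bias."""
            pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
            return (statistics.mean(incompatible_ms)
                    - statistics.mean(compatible_ms)) / pooled_sd

        # hypothetical keypress latencies in milliseconds
        print(d_score([650, 700, 680], [820, 790, 845]))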

    What is it like to receive this news, and what do the public think of the IAT more generally? To find out, a team of researchers, led by Jeffery Yen at the University of Guelph, Ontario, analysed 793 reader comments to seven New York Times articles (op-eds and science stories) about the IAT published between 2008 and 2010. The findings appear in the British Journal of Social Psychology.

    Crudely speaking, the readers could be divided into sceptics and believers. Among the former were those who felt the idea of implicit bias was an academic abstraction in a world of “real racism”. “To me the question of whether [unconscious] racism exists is almost irrelevant when 1 in 15 black adults and 1 in 9 black men between 20 and 34 is in jail,” wrote Nick. “It’s a shame so much time is spent pulling apart such … tiny bits of data … There are many, many examples of actual bias … ,” wrote Jonathan.

    Others expressed scepticism about their personal test results, and they often pushed back, arguing that the scientists behind the IAT have a political agenda. “We laugh at the religious for blindly following dogma and dismissing ‘science’. There is as much dogma in this test methodology and the conclusions its backers draw from it,” wrote Luke. An alternative, sceptical reaction was sardonic humour: “I am a white male in his mid-30s, yet I’m good. Even subconsciously! Yes!” wrote vkm.

    The reaction among believers in the validity and power of the IAT was very different from that of the sceptics, with many embarking on what the researchers called “morally inflected soul searching”. For example, an Asian American voiced concern about her (according to the IAT) anti-white prejudice: “Somewhat more troubling to me is not my results, but that I almost feel proud of them, when my sense of right and wrong tells me I shouldn’t be … much as I wouldn’t be of an anti-black one,” wrote Iris.

    Others “confessed” to their implicit prejudice while making a virtue of their willingness to own up to their “guilt”: “I’m happy to own my implicit biases and glad to be made conscious of them,” wrote Jennifer. “I am open-minded enough to be introspective and search my soul for bias of which I might have been unaware previously …,” wrote Bob.

    As well as advertising their own wokeness, many of the fans of the IAT also criticised the test’s detractors. “Interesting that a number of these posts are angry,” wrote Laura. “Is this the response of defensive people who don’t want to get close enough to the truth of something to acknowledge it may have merit?” Or take John’s comment: “We should each go home and look in the mirror and recognize the ultimate Bad Guy – who ultimately must become the Good Guy to be the solution.”

    Yet another kind of reaction among believers in the IAT was to see implicit prejudice as an unavoidable aspect of being human, thereby absolving the test-taker of responsibility. “I do not believe we can ever get rid of racism and sexism from within ourselves. No amount of education on the importance of tolerance and equality can trump our biological instincts,” wrote David.

    The context of this research is that the IAT has become perhaps the most famous and widely taken modern psychological test, even though serious concerns have been raised about its reliability (take the test today and again tomorrow and you will probably find your results have changed) and validity (an individual’s score on the IAT does not tell you much, if anything, about how he or she is likely to behave in the real world). Despite these problems, the test and the concept of implicit prejudice now form the basis of compulsory diversity programmes for many employees.

    Arguably there is an important ethical discussion to be had about the fact that millions of people are receiving test feedback of questionable meaning, and about how this might affect them (the current version of the IAT test site features a disclaimer that attempts to offset these concerns). However, anyone hoping for a hard-hitting ethical critique of the IAT will not find it in this paper. Yen and his colleagues write that they have deliberately side-stepped these issues. “Rather, our objective has been to draw out the social implications of the science in relation to the changing context of prejudice discourse.”

    They added: “Our analysis provided a first demonstration of how this research and technology have begun to function in lay understandings of prejudice and public discourse”. In this sense, their findings make a novel and useful contribution, even though it is not clear how representative New York Times readers are of public reaction more generally. Inevitably for research of this kind, there will also have been a large dose of subjectivity in how the researchers parsed the hundreds of online comments.

    Yen’s team end on an optimistic note: “Both the idea of implicit bias and the practice of measuring it can … impact on the way people think of themselves, others, and their prejudices. They provide tools for talking about prejudice, for moral-psychological work on the self, for explaining social ills, and for mobilizing others to act in the interests of change.”

    ‘I’m happy to own my implicit biases’: Public encounters with the implicit association test

    Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on March 15, 2018 08:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Acknowledging identity, privilege, and oppression in music therapy

    As clinical music therapy professionals who are goal- and solution-oriented, how much time do we spend considering our client’s experience outside the therapy room? How might taking the time to learn about a client’s multifaceted identity affect the therapeutic relationship? Furthermore, how do our own personal identities, beliefs, and experiences affect our relationships with clients? In answering these questions, we begin to scratch the surface of making our practice more intersectional.

    Intersectionality is particularly important when considering the ways in which marginalized and oppressed identities are interlinked and how they create lived experiences that are different from those with privileged identities or “social statuses.”

    Theories of intersectionality emerged from U.S. Black feminist and women of color activist communities who saw themselves omitted from dominant movements for social justice, including feminism that foregrounded White women’s issues, as well as civil rights activism focused on Black men’s experiences. The original metaphor of traffic in an intersection, coined by Kimberlé Crenshaw in 1989, sought to describe how Black women’s identities often make them the target of multiple, simultaneous forms of oppression (e.g., racism and sexism).

    But intersectionality goes far beyond merely describing how people embody multiple social identities—it helps us understand how people are differently situated in society because of those identities.

    So, what does intersectionality have to do with music therapy? 

    Essential documents in the field of music therapy highlight the importance of being a culturally responsive clinician. For example, the AMTA Professional Competencies indicates a music therapist must “demonstrate awareness of the influence of race, ethnicity, language, religion, marital status, sexual orientation, age, ability, socioeconomic status, or political affiliation on the therapeutic process.” Furthermore, music therapists must demonstrate knowledge of and skill in working with culturally diverse populations. Thus, whether intentionally acknowledged or not, dynamic systems of privilege and oppression play a role in the therapeutic process and client-therapist relationship.

    Because these facets of identity consist of interwoven relationships, consideration of intersectionality is crucial to meet professional standards of practice. By acknowledging the interconnected pieces of an individual’s identity, we move away from the danger of creating harmful stereotypes or neglecting components of an individual’s identity that play crucial roles in the way they move through the world.

    Furthermore, as a profession we must consider the message our field may be sending if the identities of underrepresented and marginalized individuals are not reflected in the music therapists that serve them.

    How can we commit to intersectionality?

    Sociocultural considerations have historically been supplemental concerns within clinical research and practice; however, they really belong at the core. For example, undergraduate music therapy programs have traditionally included one course on multicultural music with the goal of helping students move towards cultural competency. Conversely, a truly intersectional approach would acknowledge that cultural differences extend far beyond just music and should be woven throughout the curriculum. Additionally, intersectional training would be sensitive to who is producing and represented in the curriculum and would insist upon inclusion of research done by and about individuals with marginalized identities (i.e., scholarship produced by and for people of color, individuals with disabilities, LGBTQ+ people, etc.).

    Furthermore, becoming culturally responsive (rather than culturally competent) would be viewed as a life-long process of continual self-reflection and critical engagement with cultures that differ from one’s own, not a skill to be mastered.

    Attending to intersectionality requires that we start listening to and mainstreaming voices that have been ignored in theory and research. These unheard voices include scholars of feminist theory, disability studies, critical race theory, queer theory, and burgeoning fields like transgender studies.

    Incorporating principles of critical theories in music therapy opens possibilities for progressive models of practice, such as “queer music therapy.” Even further, applying such approaches should involve continuous evaluation and refining. As in our discussion of striving toward continuous self-reflection and critical engagement with intersectionality in training and practice, research must be held to the same standard.

    Engaging with intersectionality

    It is essential for music therapists to actively engage with intersectionality in research and practice, with the ultimate goal of improving outcomes for all our clients. The only way for intersectionality theory to create real change is to apply what we learn and to think more critically about putting intersectional principles into action.

    This can often be the most intimidating piece of working to improve our practice because it requires a great deal of cultural responsiveness, self-reflexivity, humility, vulnerability, and a willingness to unequivocally advocate for underrepresented voices within our client base and profession.

    However, the field of music therapy is due for a transformation – and it is likely not alone. Thus, here are some steps we’ve identified that clinicians and researchers can take to move towards more intersectional practice:

    • Read and engage with the texts of critical theory scholars and activists;
    • Start or join in critical dialogues with colleagues about how we can make the profession more representative of and affirming for members of marginalized and underrepresented groups;
    • Carry out and propel culturally responsive research, including through collaborations with members of underrepresented groups in the field; and
    • Insist upon anti-oppressive practice for marginalized clients.

    Featured image credit: Photo by Daniel van Beek. Used with permission.

    The post Acknowledging identity, privilege, and oppression in music therapy appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on March 15, 2018 07:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dwarf planet Ceres may store underground brine that still gushes up today

    Waterlogged minerals and changing ice add to evidence that Ceres is geologically active.

    in Science News on March 14, 2018 10:38 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Is Human Adult Neurogenesis Dead? And Does It Matter?

    Does the human brain continue creating new neurons throughout adult life? The idea that neurogenesis exists in the adult human hippocampus has generated a huge amount of excitement and stimulated much research. It's been proposed that disruptions to neurogenesis could help to explain stress, depression, and other disorders. But a new study, published in Nature, has just poured cold water on the whole idea. Researchers Shawn F. Sorrells and colleagues report that neurogenesis ends in humans so

    in Discovery magazine - Neuroskeptic on March 14, 2018 08:14 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    New Horizons’ next target has been dubbed Ultima Thule

    NASA has named New Horizons spacecraft’s next target Ultima Thule after the public suggested tens of thousands of monikers for the Kuiper Belt object.

    in Science News on March 14, 2018 07:52 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Hospital admissions show the opioid crisis affects kids, too

    Opioid-related hospitalizations for children are up, a sad statistic that shows the opioid epidemic doesn’t just affect adults.

    in Science News on March 14, 2018 05:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Forget Pi Day. We should be celebrating Tau Day

    Pi Day may be fun, but it’s based on a flawed mathematical constant.

    in Science News on March 14, 2018 03:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Cancer biologist retracts five papers

    A cancer researcher based at The Ohio State University has retracted five papers from one journal, citing concerns about figures. The notices for all five papers state the Journal of Biological Chemistry raised questions about some figures, and the authors were not able to supply raw data in all instances. Four of the notices say … Continue reading Cancer biologist retracts five papers

    in Retraction watch on March 14, 2018 03:05 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Renowned physicist Stephen Hawking dies at 76

    Beyond his research contributions, Stephen Hawking popularized black holes and the deep questions of the cosmos.

    in Science News on March 14, 2018 02:06 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Handling Variability in Model Transformations and Generators

    The talk will be on Völter et al.’s paper "Handling Variability in Model Transformations and Generators". Marco will provide an introduction to the topic and explain how it relates to his research.

    Paper abstract:

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. Using DSLs and AO to implement product lines can yield significant advantages, since the variability can be implemented on a higher level of abstraction, in less detailed models. This paper illustrates how variability can be implemented in model-to-model transformations and code generators using aspect-oriented techniques. These techniques are important ingredients for the aspect-oriented model-driven product line engineering approach presented in [13].
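
    As a rough flavour of the idea, sketched in Python rather than the paper's model-driven tooling: a generator template is woven against a product's feature selection, and an aspect-like rule removes elements whose feature is not selected ("negative variability"). The template and feature names are invented for illustration.

        # Toy sketch of negative variability in a code generator: elements
        # whose feature is not selected are stripped from the product.
        BASE_TEMPLATE = ["init()", "log_event()", "run()"]

        def weave(template, features):
            return [step for step in template
                    if step != "log_event()" or "logging" in features]

        print(weave(BASE_TEMPLATE, {"logging"}))  # product with the logging feature
        print(weave(BASE_TEMPLATE, set()))        # minimal product, advice stripped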

    Date: 28/02/2018
    Time: 16:00
    Location: LB252

    in UH Biocomputation group on March 14, 2018 01:28 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    For this scientist, her name was her destiny

    Dr. Hasibun Naher of Bangladesh builds mathematical models to predict tsunamis and earthquakes

    in Elsevier Connect on March 14, 2018 01:10 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Stem cell researchers investigated for misconduct recommended for roles at Italy’s NIH

    Two stem cell scientists who left Harvard University in the aftermath of a messy misconduct investigation may have found new roles in Italy’s National Institute of Health. According to a document on the institute’s website, which we had translated, Piero Anversa and Annarosa Leri have been approved to start work at the Istituto Superiore di Sanità (ISS) … Continue reading Stem cell researchers investigated for misconduct recommended for roles at Italy’s NIH

    in Retraction watch on March 14, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Creatine and the Brain

    The human brain depends on a constant energy supply for proper functioning. Impairments of energy supply can jeopardize brain function and even contribute to the pathogenesis or progression of neurodegenerative diseases. Chronic disruption of the energy supply causes degradation of cellular structures and creates conditions that favor the development of Parkinson’s, Alzheimer’s, or Huntington’s disease. In addition, impaired brain energy metabolism is one of the important contributors to the pathogenesis of psychiatric disorders. Thus, interventions that can increase or regulate local energy stores in the brain might be neuroprotective and represent a good therapeutic tool for managing various neurological and neurodegenerative conditions.

    One of the potential therapeutic agents for restoring brain energy is creatine. Creatine is particularly important since it replenishes ATP (a cellular unit of energy) without relying on oxygen.
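
    The mechanism at work is the textbook creatine kinase reaction (stated here for context, not taken from the article): phosphocreatine donates its phosphate group to ADP, regenerating ATP with no oxygen required:

        phosphocreatine + ADP + H+  ⇌  creatine + ATP    (catalysed by creatine kinase)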

    Creatine is better known as one of the most popular supplements for bodybuilding. A naturally occurring compound, it is generally well tolerated and is commonly used by gym-goers. Creatine is mostly stored in muscles, where it serves as an easily available source of energy. But according to scientific findings, creatine also concentrates in the brain. It is an important component of the creatine kinase/phosphocreatine system that plays an important role in the metabolic networks of the brain and central nervous system and is involved in many of the brain’s functions. Experimental studies have indicated that creatine can protect from ischemic cell damage (which is caused by a lack of oxygen) by preventing ATP (energy) depletion and reducing structural damage to the affected brain cells.

    In spite of promising laboratory findings, investigation of creatine’s effects in the human brain has produced mixed results. So far, studies on oral supplementation with creatine have demonstrated some benefits. For instance, one study in healthy young volunteers showed that oral supplementation with creatine monohydrate for 4 weeks leads to a significant increase in the total creatine concentration in the participants’ brains, with the most pronounced rise seen in the thalamus. The fact that creatine concentrates in the brain after consumption indicates that creatine can cross the blood-brain barrier, making benefits of creatine supplementation for the brain plausible.

    Another study has investigated the impact of creatine consumption on brain chemistry, including the brain’s high-energy phosphate metabolism. After two weeks of creatine supplementation, the brain’s creatine level significantly increased, as did the concentrations of phosphocreatine and inorganic phosphate. This study clearly demonstrates the possibility of using creatine supplementation to modify high-energy phosphate metabolism in the brain. This is especially important for people with certain brain disorders, as alterations in brain phosphate metabolism have been reported in depression, schizophrenia, and in cases of cocaine and opiate abuse.

    Another human study demonstrated that creatine can improve cognitive performance during oxygen deprivation. The participants received creatine or placebo for seven days and were then exposed to a hypoxic gas mixture. In comparison to the placebo group, supplementation with creatine helped to restore cognitive performance, especially attention capacity, which was affected by hypoxia. Creatine also helped to maintain an appropriate neuronal membrane potential in brain cells. This research demonstrated that creatine can be a valuable supplement when cellular energy provision is jeopardized. In addition, it supports the idea that creatine is beneficial not only for recovering muscle strength but for restoring brain function too.

    Approximately half of the daily requirement for creatine (around 3–4 grams) comes from alimentary sources, while the other half is endogenously produced in the body. Creatine is a carninutrient, meaning that it is available only from animal foods (mostly meat). Since creatine is not present in plant-based foods, plasma and muscle levels of creatine are commonly lower in vegetarians and vegans than in omnivores. Thus, individuals whose diet is based on plant foods may benefit from creatine supplementation in terms of improvements in brain function. One study in young adult females investigated the impact of creatine supplementation on cognitive functions in both vegetarians and omnivores. Compared to the placebo group, 5 days of supplementation with creatine led to significant improvements in memory, and this improvement was more pronounced in vegetarians. Another study investigated the effects of 6 weeks of creatine supplementation in young vegetarians. In comparison with placebo, creatine induced significant improvements in intelligence and working memory, both functions that depend on the speed of information processing. This study showed that brain performance depends on the level of energy available in the brain, which can be beneficially influenced by creatine supplementation.

    Creatine supplementation seems to be beneficial not only for healthy people but also for individuals with psychiatric disorders. For instance, decreased creatine levels have been reported in the brains of patients with anxiety disorders. Post-traumatic stress disorder (PTSD) is a type of anxiety condition that develops in people who have experienced traumatic situations. Creatine supplementation was shown to be beneficial in treatment-resistant PTSD patients, providing relief from symptoms and improved sleep quality.

    Furthermore, studies of creatine’s functions in the central nervous system underline its therapeutic potential in neurodegenerative diseases, since creatine supplementation can reduce the loss of neuronal cells. Animal studies have also demonstrated that the size of the brain’s creatine stores plays an important role in Alzheimer’s disease, and creatine supplementation was found to be beneficial in animal models of Parkinson’s disease as well – a rationale for using creatine in these conditions.

    To sum up, it seems that creatine can be used as a supplement for replenishing the brain’s energy stores. This can further improve cognitive functions and brain performance, with the effects more pronounced in vegans and vegetarians. In addition, creatine has therapeutic potential in psychiatric disorders and neurodegenerative conditions.


    Turner, C.E., Byblow, W.D., Gant, N. (2015) Creatine supplementation enhances corticomotor excitability and cognitive performance during oxygen deprivation. Journal of Neuroscience. 35(4): 1773-1780. doi:10.1523/JNEUROSCI.3113-14.2015

    Dechent, P., Pouwels, P.J., Wilken, B., Hanefeld, F., Frahm, J. (1999) Increase of total creatine in human brain after oral supplementation of creatine-monohydrate. American Journal of Physiology. 277(3 Pt 2): R698-R704. PMID:10484486

    Lyoo, I.K., Kong, S.W., Sung, S.M., Hirashima, F., Parow, A., Hennen, J., Cohen, B.M., Renshaw, P.F. (2003) Multinuclear magnetic resonance spectroscopy of high-energy phosphate metabolites in human brain following oral supplementation of creatine-monohydrate. Psychiatry Research. 123(2): 87-100. PMID:12850248

    Brosnan, M.E., Brosnan, J.T. (2016) The role of dietary creatine. Amino Acids. 48(8): 1785-1791. doi:10.1007/s00726-016-2188-1

    Benton, D., Donohoe, R. (2011) The influence of creatine supplementation on the cognitive functioning of vegetarians and omnivores. British Journal of Nutrition. 105(7):1100-1105. doi:10.1017/S0007114510004733

    Rae, C., Digney, A.L., McEwan, S.R., Bates, T.C. (2003) Oral creatine monohydrate supplementation improves brain performance: a double-blind, placebo-controlled, cross-over trial. Proceedings. Biological Sciences. 270(1529): 2147-2150. doi:10.1098/rspb.2003.2492

    Andres, R.H., Ducray, A.D., Schlattner, U., Wallimann, T., Widmer, H.R. (2008) Functions and effects of creatine in the central nervous system. Brain Research Bulletin. 76(4): 329-343. doi:10.1016/j.brainresbull.2008.02.035

    Image via TheDigitalArtist/Pixabay.

    in Brain Blogger on March 14, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What role does branding play in the smoking experience?

    In a study recently published in BMC Public Health, Melanie Wakefield et al. from the Cancer Council Victoria employed a within-subjects design to test their hypothesis that the presence of a premium brand name would enhance perceived smoking experience and result in higher purchase intent, compared to when that same cigarette was presented with the brand name masked. The study was carried out in 2015, approximately 2.5 years after the introduction of plain packaging in Australia.

    Using a sample of 75 Australian smokers aged 18-39 years, the researchers asked participants to smoke (take four puffs of) two identical cigarettes (1) with the brand variant name (branded) and (2) without the brand variant name (masked), the order of which was randomized. Only premium brands/more expensive mainstream brands were chosen and participants were required to be familiar with one of the eligible brand variants in order to be selected for the study.

    “Masked” cigarettes were presented to participants on a plain white, ceramic dish whereas “Branded” cigarettes were presented to participants in their premium/upper-mainstream branded pack. All packs displayed the same “Smoking causes blindness” health warning in circulation at the time of the study.

    Participants perceived that the branded cigarette tasted better and was less stale than the masked cigarette. Furthermore, fewer participants reported that they would be likely to purchase the masked cigarette compared to the branded cigarette. The authors also found that expected enjoyment of the brand variant and objective enjoyment of the cigarette (assessed through the masked condition) both significantly predicted perceived enjoyment of the cigarette when the brand variant name was known. However, objective cigarette quality did not predict perceived quality when the brand variant name was known.

    The results of the study indicate that, even in a plain packaging marketplace, branding can still influence smokers’ perceptions and that smokers may still associate particular brands with advertising even after the introduction of plain packaging. Countries which are considering the implementation of plain packaging should therefore also consider the effects of the remaining brand variant names.

    The researchers caution that the high proportion of participants who were not considering quitting in the next 6 months may have influenced the study results, as it is possible that smokers with no desire to quit have more favorable expectations of cigarette brands. Furthermore, participants were only asked to take four puffs of each cigarette within a single testing session. Future studies which allow smokers to experience the full cigarette, in their own time and in their regular environment, may clarify findings and strengthen conclusions.

    The post What role does branding play in the smoking experience? appeared first on BMC Series blog.

    in BMC Series blog on March 14, 2018 09:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Investigating the “STEM gender-equality paradox” – in fairer societies, fewer women enter science

    The percentage of women with STEM degrees is lower in more gender-equal countries, as measured by the WEF Gender Gap Index. Image from Stoet & Geary, 2018.

    By Alex Fradera

    The representation of women in STEM fields (science, technology, engineering and maths) is increasing, albeit more slowly than many observers would like. But a focus on this issue has begun throwing up head-scratching anomalies, such as Finland, which has one of the larger gender gaps in STEM occupations despite being one of the more gender-equal societies and boasting higher science literacy among its girls than its boys. Now a study in Psychological Science has used an international dataset of almost half a million participants to confirm what the authors call the “STEM gender-equality paradox”: more gender-equal societies have fewer women taking STEM degrees. The research also goes much further, exploring the causes driving these counterintuitive findings.

    Gijsbert Stoet at Leeds Beckett University and David Geary at the University of Missouri analysed several large and often publicly available datasets, like the gender inequality measures taken by the World Economic Forum (WEF; based on metrics like women’s earnings, life expectancy and seats in parliament) and UNESCO data on STEM degrees.

    The researchers found the percentage of women STEM graduates is higher for countries that have more gender inequality. For instance, countries like Tunisia, Albania and Turkey, which come out the poorest on the WEF gender equality measures, see women making up 35-40 per cent of STEM graduates, whereas in countries with more gender equality, like Switzerland and Norway, the figure is lower at around 20 per cent, similar to Finland.

    To better understand the STEM gender-equality paradox, Stoet and Geary accessed results from a 2015 OECD educational survey of the science literacy and attitudes to science of 15- to 16-year-old students from 67 countries. Objectively, neither boys nor girls were more scientifically literate overall – girls were better in 19 countries, boys in 22, with no difference in the others.

    These survey results suggest it is not girls’ lack of scientific knowledge or negative attitudes toward science that holds them back. It’s possible, however, that girls might match or outperform boys in science lessons in some countries and still be making a rational choice to avoid STEM routes because they outperform boys even more in other areas (it’s well documented that girls outperform boys at school on many topics, on average).

    Using the OECD survey, Stoet and Geary calculated a personal ranking for each student of their relative ability across the three main areas of maths, science and reading. In all but two of the 67 nations, boys were more likely than girls to be personally strongest in science (80 per cent of boys were personally strongest in either maths or science; in contrast, half of girls were personally strongest in reading). Stoet and Geary found that boys’ tendency to be personally stronger in science was most apparent in more gender-equal countries, the very countries where boys go on to pursue more STEM careers. This smooths some of the kinks in the STEM gender-equality paradox: in fairer societies, boys seem to be optimising their future by pursuing science-like activities, whereas girls have other options on the table.
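
    To make the computation concrete, here is a rough sketch with invented scores – one plausible way to realise the intra-individual comparison described; the authors’ exact standardisation may differ.

        import pandas as pd

        # hypothetical PISA-style scores for three students
        df = pd.DataFrame({
            "student": ["a", "b", "c"],
            "maths":   [520, 480, 600],
            "science": [530, 470, 610],
            "reading": [560, 500, 580],
        })
        subjects = ["maths", "science", "reading"]

        # standardise each subject across students, then take each student's
        # relatively strongest subject as their "personal strength"
        z = df[subjects].apply(lambda col: (col - col.mean()) / col.std())
        df["personal_strength"] = z.idxmax(axis=1)
        print(df[["student", "personal_strength"]])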

    However, questions remain. For instance, why should a motivated and academically talented teenage girl forego a science route because her reading slightly outperforms her other literacies? After all, reading and writing skills are also beneficial to scientists, from paper writing to fundraising.

    Stoet and Geary tried to address this question by focusing on gender differences in motivation, in terms of interest, confidence and enjoyment. In 60 per cent of countries, boys showed more interest in science than girls (in various fields, from disease prevention to energy), and the gender-gap in scientific interest was greater in the most gender-equal countries; boys also expressed more confidence in their science abilities in 39 of 67 countries, especially the gender-equal ones. And even though girls reported enjoying scientific activities more than boys in two-thirds of the countries, boys’ enjoyment was higher in the more gender-equal countries.

    Why are boys most enthusiastic, interested, and personally strongest at science in more gender-equal societies? The authors suggest that in highly stable countries with strong welfare systems, people can pursue their calling and unlock their personal potential, building their future around their genuine interests and personal strengths. This echoes the finding popularised in online lectures by psychologist Jordan Peterson that sex-related personality differences are higher in gender-equal societies – when societies’ social pressures are less tyrannical, individual tendencies can be expressed more freely. In more repressive cultures, by contrast, young people are liable to prioritise pragmatism – food on the table – over self-actualisation, and as STEM jobs tend to be stable and well-paid, that would encourage more female representation. Consistent with this, Stoet and Geary used a United Nations life satisfaction measure as a proxy for cultural stability and found that more women took STEM degrees in countries where life satisfaction is lower – which tended to be in the unequal societies.

    These findings suggest we need a nuanced approach to the sticky issue of gender and participation in science. Firstly, there is no question from this data that, objectively, young women across the globe are just as capable of tackling scientific subjects as young men. And even after taking into account the gender differences in science attitudes and personal strengths, the researchers calculated that, in a society where women’s rational preferences led directly to their level of STEM participation, we should see women take 34 per cent of STEM degrees, while the actual global average is 28 per cent – so other factors unaddressed in this study are clearly leading women away from science roles. The study doesn’t suggest, then, that we rest on our laurels and validate the status quo.

    It does suggest, however, that we misunderstand the current level of STEM gender imbalance if we attribute it entirely to social injustice. A substantial cause of the current STEM gender mix may be the product of young men and women making considered, rational choices to leverage their strengths and passions in different ways.

    The Gender-Equality Paradox in Science, Technology, Engineering, and Mathematics Education

    Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 14, 2018 08:53 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Scientists Illuminate Mechanism at Play in Learning

    New research illuminates complex molecular network involved in learning.

    in OIST Japan - CNU - Eric De Schutter on March 14, 2018 01:22 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Cosmic dust may create Mars’ wispy clouds

    Magnesium left by passing comets seeds the clouds of Mars, a new study suggests.

    in Science News on March 13, 2018 09:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Hey Brain Matters, I am a Neuroscience undergrad at ASU who is starting a podcast with fellow undergrads to talk about everything psychology and neuroscience. We are trying to focus on topics that deal with heuristics and delivering neuroscience as thought-provoking and accessible to all audiences. Dr. B B Braden referred me to your site and I have gotten addicted listening to your episodes. I would be grateful if you could share some advice on how to start our own podcast. Thanks so much, Milo.

    Hey Milo - Thanks for listening. I’d be happy to give you some advice on how we got started. Shoot me an email at anthony@brainpodcast.com and I can hopefully provide some help.


    in Brain matters the Podcast on March 13, 2018 06:03 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Brain waves may focus attention and keep information flowing

    Not just by-products of busy nerve cells, brain waves may be key to how the brain operates.

    in Science News on March 13, 2018 05:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dino-bird had wings made for flapping, not just gliding

    Archaeopteryx fossils suggest the dino-birds were capable of flapping their wings in flight.

    in Science News on March 13, 2018 04:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Altmetrics reveal insights into the impact of scientific knowledge

    Want a higher h-index? Maybe you should be spending more time on Twitter

    in Elsevier Connect on March 13, 2018 03:20 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Journals retract 30 papers by engineer in South Korea

    An engineer in South Korea has lost 30 papers, at least seven of them for duplication and plagiarism. He has also been fired from his university position. Soon-Gi Shin, whose affiliation was listed as Kangwon National University in Gangwon, is the sole author on the majority of the papers, published in four journals between 2000 … Continue reading Journals retract 30 papers by engineer in South Korea

    in Retraction watch on March 13, 2018 03:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The retraction process needs work. Is there a better way?

    Retractions take too long, carry too much of a stigma, and often provide too little information about what went wrong. Many people agree there’s a problem, but often can’t concur on how to address it. In one attempt, a group of experts — including our co-founder Ivan Oransky — convened at Stanford University in December … Continue reading The retraction process needs work. Is there a better way?

    in Retraction watch on March 13, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The dramatic increase in the diagnosis of ADHD has not been accompanied by a rise in clinically significant symptoms

    By guest blogger Helge Hasselmann

    Across the globe, ADHD prevalence is estimated at around 5 per cent. It’s a figure that’s been rising for decades. For example, Sweden saw ADHD diagnoses among 10-year-olds increase more than sevenfold from 1990 to 2007. Similar spikes have been reported from other countries, too, including Taiwan and the US, suggesting this may be a universal phenomenon. In fact, looking at dispensed ADHD medication as a proxy measure of ADHD prevalence, studies from the UK show an even steeper increase.

    Does this mean that more people today really have ADHD than in the past? Not necessarily. For example, greater awareness among clinicians, teachers or parents could simply have captured more patients who had previously been “under the radar”. Such a shift in awareness or diagnostic behaviour would inflate the rate of ADHD diagnoses without any real increase in the number of people with clinical ADHD. However, if this is not the true or full explanation, then perhaps ADHD symptoms really have become more frequent or severe over the years. A new study in The Journal of Child Psychology and Psychiatry from Sweden with almost 20,000 participants has now provided a preliminary answer.

    The researchers, led by Mina Rydell at Karolinska Institutet, examined data from participants in the Child and Adolescent Twin Study in Sweden (CATSS), an ongoing study of all twins in Sweden that started in 2004 and aims to track their physical and mental health, with various measures taken the year that the children turn nine years of age.

    Specifically, the researchers analysed A-TAC (Autism-Tics, ADHD and other Comorbidities Inventory) scores from 19,271 children from 9,673 families recorded between 2004 and 2014. The A-TAC is a telephone-based interview in which parents are quizzed about their kids’ behaviour and mental health, including sub-scales focused on attention deficits and hyperactivity. The questions are about symptoms with no mention of diagnostic categories and the wording has stayed the same over the years. A typical question is “Does he/she have difficulties keeping his/her hands and feet still or can he/she not stay seated?”.

    The researchers used the A-TAC scores to classify the proportion of children in different years with diagnostic-level ADHD, subthreshold ADHD or no ADHD. Important to keep in mind here is that instruments like the A-TAC are restricted to assessing the severity of certain symptoms and cannot be used to diagnose children with ADHD (only clinicians and mental health experts can diagnose someone). For example, if a child fell in the diagnostic-level ADHD category, it would mean that the severity of his or her ADHD symptoms would likely result in a diagnosis by a specialist, but this couldn’t be known for sure. The authors calculated the changes in these categories, as well as in mean A-TAC scores, over time by comparing results from parent interviews conducted in successive periods between 2004 and 2014.
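
    The classify-then-count logic of this kind of analysis is simple enough to sketch. Below is a minimal, purely illustrative version in Python — the cutoff values and records are invented for demonstration and are not the study’s actual thresholds or data:

        import pandas as pd

        # Hypothetical records: one row per child, with interview year and A-TAC score.
        df = pd.DataFrame({
            "year": [2004, 2004, 2009, 2009, 2014, 2014],
            "atac": [1.0, 9.5, 3.0, 13.0, 7.5, 14.5],
        })

        def classify(score: float) -> str:
            # Invented thresholds standing in for the study's diagnostic-level
            # and subthreshold cutoffs.
            if score >= 12.5:
                return "diagnostic-level ADHD"
            if score >= 8.0:
                return "subthreshold ADHD"
            return "no ADHD"

        df["category"] = df["atac"].apply(classify)

        # Proportion of each category per interview period.
        prevalence = (df.groupby("year")["category"]
                        .value_counts(normalize=True)
                        .rename("proportion"))
        print(prevalence)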

    Across the 10-year study period, 2.1 per cent of all participants (n=406) showed diagnostic-level ADHD and 10.7 per cent (n=2,058) showed subthreshold ADHD. Interestingly, there was no statistically significant increase in diagnostic-level ADHD prevalence over time; it fluctuated around 2 per cent in most years. On the other hand, the prevalence of subthreshold ADHD increased significantly from 2004 to 2014, reaching a peak of 14.76 per cent. Mean ADHD scores and inattention/hyperactivity-impulsivity sub-scale scores showed a similar increase from 2004 to 2014.

    These symptom changes over time are probably not due to shifts during the study in which twin families agreed to take part in the research. Using the National Patient Register, the researchers showed that while participants in the twin study had fewer ADHD diagnoses than non-participants, this difference did not change over the years of the study, so it is unlikely to explain the results. Perhaps most important, the register showed that the prevalence of clinician-diagnosed ADHD increased more than fivefold from 2004 to 2014, even though diagnostic-level ADHD in the twin study showed no similar rise.

    So while diagnosis rates of clinical ADHD increased during the period of the study, the findings from the twin study suggest that only milder forms of ADHD-related symptoms became more frequent across the population during the same years. The number of people whose ADHD symptoms are severe enough to merit a diagnosis appears to have remained stable, meaning other factors are more probably driving the rise in diagnoses. While speculative, these could be related to changes in awareness among parents, teachers or clinicians; societal or medical norms; or better access to healthcare.

    There are several caveats to keep in mind when interpreting these findings. For example, as mentioned, the A-TAC relies on parents’ reports, which might not be the most reliable source of information; indeed, a diagnosis of ADHD requires symptom impairment in at least two different contexts, such as at school and at home, which parent report alone cannot establish. Because only twins were enrolled in CATSS, it is also not clear whether these results apply to non-twin children. A similar argument could be made about the age of the participants.

    Keeping its limitations in mind, this study highlights an important point by providing an alternative explanation for rising ADHD diagnoses. This demonstrates the effects that shifts in societal, political or medical opinion can have on the “prevalence” of an illness. Considering that more diagnoses are likely to go hand in hand with more (potentially unnecessary) medication, this study provides food for thought to clinical and political decision-makers.

    Has the attention deficit hyperactivity disorder phenotype become more common in children between 2004 and 2014? Trends over 10 years from a Swedish general population sample

    Post written for BPS Research Digest by Helge Hasselmann. Helge studied psychology and clinical neurosciences. Since 2014, he has been a PhD student in medical neurosciences at Charité University Hospital in Berlin, Germany, with a focus on understanding the role of the immune system in major depression.

    in The British Psychological Society - Research Digest on March 13, 2018 09:09 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Cells and Synapse

    OIST Computational Neuroscience Unit
    13 Mar 2018

    An electron micrograph shows a parallel fiber–Purkinje cell synapse. The presynaptic cell, a parallel fiber, is colored red, while the postsynaptic cell, a Purkinje cell, is colored green.

    in OIST Japan - CNU - Eric De Schutter on March 13, 2018 08:03 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Model of LTP and LTD

    OIST Computational Neuroscience Unit
    13 Mar 2018

    Learning is thought to be a balance between two processes that act as a kind of molecular dial: long-term potentiation (LTP), in which the connection between two neurons is strengthened, and long-term depression (LTD), in which the connection between two neurons is weakened. Such a large, comprehensive model allows scientists to examine how complex signaling systems work together.

    in OIST Japan - CNU - Eric De Schutter on March 13, 2018 08:02 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Eric De Schutter and Andrew Gallimore

    13 Mar 2018

    Researchers Eric De Schutter and Andrew Gallimore have modelled the molecular basis of learning in the cerebellum.

    in OIST Japan - CNU - Eric De Schutter on March 13, 2018 08:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The science of sleep: Part I

    This post is gonna be about a passion of mine. Something I can indulge in anytime, anywhere, for any amount of time. Something I would choose over diamonds and Michelin restaurants. I now realise the punchline of this joke would have been better if the title hadn’t already disclosed that it’s gonna be about sleeping. Oh well.

    Sleep, as mysterious as it is necessary, still puzzles scientists -- and they have been researching it for decades (and people have been sleeping for thousands of years). What exactly happens when we sleep? Why do we do it? Can we stop sleeping at all? What happens then? Got you curious enough? Then read on and find out how close we are to answering all of these questions.


    1. Some theoretical background: the what’s and when’s of snoozing

    Everyone has heard about REM sleep: either because of its connection to dreams, or because of how important it is, or just because you liked “Losing My Religion”. But there is more to sleep: it also contains three non-REM stages with different characteristics and functions (and differing probabilities that you will be grumpy when woken up during each of them).

    Stage 1 sleep (Non-REM1) is the drowsy sleep phase when you drift between waking and sleeping. Your muscles are not fully inhibited yet and you can experience the “I’m falling” sensation, a sudden muscle contraction called a “myoclonic jerk” (some scientists have speculated it might stem from the brains of our primate ancestors confusing muscle relaxation with falling from a tree 1). Moreover, there is a change in your brain waves, the synchronised electrical pulses resulting from tons of neurons communicating with each other (see Fig. 1 for a visualisation of all the possible brain waves). In a waking state your brain produces a lot of brain waves called beta and gamma. Both are rather jerky and high-frequency, and are connected either to concentration (beta) or theorized to play a role in creating consciousness (gamma). In this first sleep stage, instead of beta and gamma your brain starts showing slower and more synchronized alpha waves (associated with relaxation and peacefulness) and even slower theta waves (associated with deep relaxation and daydreaming) — keyword: slowing down. This stage lasts between 1 and 10 minutes. 2, 3

    Fig 1. A short intro into what brain waves happen in your head.

    Stage 2 sleep (Non-REM2): Here, your consciousness has drifted away. Heart rate and breathing slow down, temperature decreases, you prepare to enter deep sleep, and theta waves are still very prominent. This stage together with the previous one comprises what is known as “light sleep”. We spend most of our snoozing time (around 45% of the night) in this stage. Light sleep is the best phase to wake up in, as you won’t feel groggy or disoriented but rather refreshed and ready to take on the day. 2, 3

    Stages 3+4 sleep (Non-REM 3 & 4): This is when things get serious: the deep sleep stage. It is also called slow-wave sleep because — you guessed it — the brain waves slow down and become larger. Now delta waves, the slowest waves your brain can produce, rule the party. You are unresponsive to outside sounds, hard to wake up, and your muscles are completely relaxed. This sleep stage is called restorative: your tissue gets repaired, energy gets restored, your kidneys clean your blood, you get the picture. 2, 3 Waking up during a deep sleep phase leaves you anything but refreshed (a scientifically proven fact, no citation needed).
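
    Since the stages above are distinguished largely by which frequency band dominates the EEG, the bookkeeping can be sketched in a few lines of Python. The band boundaries below are typical textbook values (conventions vary slightly between sources):

        # Typical EEG frequency bands in Hz (boundaries vary between textbooks).
        EEG_BANDS = {
            "delta": (0.5, 4.0),    # deep, slow-wave sleep
            "theta": (4.0, 8.0),    # drowsiness, light sleep, daydreaming
            "alpha": (8.0, 12.0),   # relaxed wakefulness
            "beta":  (12.0, 30.0),  # alert concentration (also present in REM)
            "gamma": (30.0, 100.0), # hypothesized role in conscious processing
        }

        def band_for(frequency_hz: float) -> str:
            """Return the band a dominant EEG frequency falls into."""
            for name, (lo, hi) in EEG_BANDS.items():
                if lo <= frequency_hz < hi:
                    return name
            return "out of range"

        print(band_for(2.0))   # "delta" -- slow-wave sleep territory
        print(band_for(10.0))  # "alpha" -- drifting off in stage 1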

    REM sleep + sleep paralysis: Arguably the most intriguing feature of sleep — dreams, vivid and elaborate — mostly happens during this stage. The defining feature is the random, rapid side-to-side eye movements. The purpose of these movements is not completely clear yet (what in neuroscience is, after all?), but theories include scanning of the scenes we see in our dreams 4, 5, 6 and memory formation 7, 8, 9 (we’re gonna talk more about the functions of sleep later). Your blood pressure and breathing rate rise to almost waking levels and your brain waves resemble the waking state too — there are even high-concentration beta waves present! 2 Due to all this weird stuff REM sleep has earned the title of “Paradoxical Sleep”. A bit scary but worth knowing: your muscles become completely paralyzed during this stage — neurotransmitters called GABA and glycine prevent your muscles from receiving brain signals and protect you from acting out your dreams and potentially hurting yourself 10. So basically we’re just lying there completely paralysed while our eyes uncontrollably dart from side to side. Lovely.

    Scientists believe that when the transition into and out of REM sleep doesn’t go smoothly, it may lead to sleep paralysis — a frightening state in which you’re already aware but still can’t move your body 11. In this limbo between wakefulness and vivid dreams, people often report seeing scary stuff, which can mostly be categorized as either an incubus (you experience chest pressure and breathing problems, which you might perceive as a demonic entity sitting on your chest), an intruder (you sense the unwanted presence of some fearful creature) or an out-of-body experience 12. This might explain a lot of reports of paranormal activity 13 or alien abduction 14 (sorry, Agent Mulder!). Approximately 7.6% of the general population suffers from sleep paralysis, while the rate increases drastically for students, 28% of whom have reported experiencing it. 15
    During an average night, you would go through several complete sleep cycles (one cycle lasting ca. 90 minutes) with REM stages getting longer towards morning.

    A short overview of what is going on during the night. As you can see, slow wave sleep dominates the beginning of the night while REM sleep becomes more prominent in the second half of the sleeping period.

    2. How does your brain fall asleep?

    It is impossible to pinpoint the exact moment of falling asleep. One moment you are still going through all the dumb stuff you did five years ago, and the next moment you’re already drifting towards the second sleep stage. So what happens in your brain when you fall asleep?

    There is a tiny thing deep in your brain called the suprachiasmatic nucleus (SCN) which is the mastermind behind our 24-hour sleep-wake cycle. It receives information directly from your eyes about how much light exposure you’re getting. It uses this information to reset your internal clock to correspond to the normal day-night cycle. In turn, the internal clock regulates multiple bodily functions accordingly, such as temperature, hormone release and, interestingly for us, sleep and wakefulness 16. Interestingly, even in the complete absence of light, our internal clock still runs on a roughly 24-hour rhythm. 17, 18 This is due to the cyclical activity of certain genes (fittingly called “clock genes”) 19. These genes produce different levels of various "clock proteins" depending on the time of day — and these proteins then regulate your daily rhythm (what your body temperature is, how much melatonin is secreted, how alert you are, etc.).
    The SCN is intricately connected to the — prepare for another long name — ventrolateral preoptic nucleus (VLPO) 20 — a structure which is active during sleep 21. These connections are thought to activate the VLPO and thus promote the onset of sleep — ‘cause when VLPO neurons are activated, they release inhibitory chemicals (called GABA and galanin) which, in turn, suppress our arousal system. So through a long chain of command a switch is flipped and your arousal slowly goes to zero. VLPO neurons can also be activated by a chemical called adenosine. Adenosine builds up during the day as glycogen, the body’s principal store of energy, breaks down, and once enough of it has accumulated it starts to promote tiredness and nudge you towards a more restful state. 22, 23 This is called homeostatic regulation, as the brain strives to balance out the tiredness that builds up with some rest.

    A short overview of the chain of command. From https://www.medscape.org/viewarticle/703161_2
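
    The interplay described above — sleep pressure from adenosine building up while awake, gated by the circadian clock — is often formalized as the classic “two-process model” of sleep regulation. Here is a toy simulation sketch of that idea; the parameters and thresholds are illustrative guesses, not fitted values:

        import math

        def simulate(hours: float = 48.0, dt: float = 0.25) -> None:
            """Print sleep/wake transitions from a toy two-process simulation."""
            s, awake, t = 0.2, True, 0.0  # s = homeostatic sleep pressure
            while t < hours:
                # Process S: pressure builds while awake, dissipates during sleep.
                s += ((1.0 - s) / 18.0 if awake else -s / 4.0) * dt
                # Process C: a crude 24-hour circadian modulation of the thresholds.
                c = 0.12 * math.sin(2.0 * math.pi * (t - 16.0) / 24.0)
                if awake and s > 0.60 + c:
                    awake = False
                    print(f"t = {t:5.1f} h: falling asleep (pressure = {s:.2f})")
                elif not awake and s < 0.20 + c:
                    awake = True
                    print(f"t = {t:5.1f} h: waking up (pressure = {s:.2f})")
                t += dt

        simulate()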

    Another sleep-related chemical is one you will see as a supplement in a supermarket: melatonin. It is produced by the pineal gland, and its production, like so many other things, is regulated by our circadian clock. When the sun goes down, the SCN commands the pineal gland to start producing melatonin (whose levels are barely detectable during the day), which is then released into your bloodstream and induces sleep. Recently, scientists have been warning us against using smartphones, TVs and other light-emitting devices before bed, as this is thought to mess with our melatonin levels. The light from our electronic devices has a much higher concentration of blue light than natural light does, and this treacherous blue light suppresses melatonin production more than any other wavelength 24, 25. This throws off the sleep-wake cycle and can lead to poorer quantity and quality of sleep, as the brain is confused about what time of day it is. So do yourself a favor and read a book before sleep instead. Or do some mindfulness meditation. Or have sex. Anything without blue light.

    Suppressed melatonin production due to the bright light exposure vs. high melatonin production when wearing goggles blocking out blue light (and obviously high melatonin production when only being subjected to the dim light).  From https://academic.oup.com/jcem/article/90/5/2755/2836826

    3. Why do we sleep? Why is sleep important?

    This is a really good question. And unfortunately there is no definite answer. As William Dement, the founder of the Stanford Sleep Research Center, said: "As far as I know, the only reason we need to sleep that is really, really solid is because we get sleepy." So… Let’s see what we already know (besides this grain of wisdom).


    The romance between sleep and memory consolidation (that is, memory stabilization) has long been suspected, and numerous studies over the last decades, if not centuries, have solidified it (though none have set the record completely straight). There is a distinction between two memory types: declarative (fact-based information, “what”-memories) and procedural (“how”-memories, like the muscle memory of how to ride a bike or play a new song on a guitar). It would be very convenient to have a clear division of labour like “slow wave sleep is responsible for this and REM is responsible for that”, but unfortunately the reality is a less clear-cut mess.

    Generally, sleep helps memory: people who sleep after learning something tend to remember the newly learnt stuff better than counterparts who stayed awake after the learning session. 26 Learning word lists 27, 28, complex finger-movement skills 29, 30 or even gaining insight into complex hidden rules 31: all of that benefited from a sleep session following the learning.

    Slow wave sleep (SWS), dominating the first part of the night, has been theorized to help specifically with the consolidation of declarative memories 32, 33, 34. The process behind the stabilisation of newly acquired memories is believed to be their reactivation in the hippocampus, our memory center, during sleep. By “replaying” the memories, their traces become more stable and less likely to fade away 35, 36. One study found that if you learned something while smelling rose odour and were then exposed to the same smell during SWS, your hippocampal activity increased and your memories the next day were stronger 37. So,

    1. Stronger hippocampal reactivation during SWS.
    2. Better memories.
    3. ???
    4. Profit!

    REM sleep, on the other hand, has been associated with procedural memory, whose consolidation does not depend on the hippocampus (but rather on rehearsing the movement commands in parts of the brain concerned with muscle control, such as the cerebellum, basal ganglia and motor cortex). 38, 39, 40 Not that much is known about the exact consolidation mechanisms of this kind of memory, so we’re gonna keep this paragraph short. However, there are studies speaking against such a clear-cut distinction (or, if you wanna be scientific, against the “dual-process hypothesis”). For instance, it was shown that SWS can also help consolidate memories for movements (=procedural) 41, 42, whereas REM sleep had some part in stabilizing memories about events and facts 43, 44. Not so clear a separation of responsibilities after all, it seems. This rather indicates that both stages are important for both types of memory (a theory called the “sequential hypothesis”, again, if you wanna be scientific): they complement each other rather than compete. It just so happens that one stage (SWS) might contribute more to one type of memory (declarative) and vice versa.

    But of course this is not the end of the story. There is — surprise — another theory trying to describe how memories are consolidated (oh boy, was it fun to learn for the exam on memory). It is called “synaptic homeostasis” and it basically says that when you’re awake and acquiring all these new memories and experiences, the connections between your brain cells (=synapses) get stronger (and new ones are even created), and that when you sleep the brain tries to downscale this huge daytime increase to a reasonable level by removing the unnecessary synapses. 45 So you could almost say you sleep to forget… in order to lift signal over noise and to start a new day refreshed and ready to learn again. Unnecessary connections and random memories get removed, while the important ones get stronger by being replayed. A recent study provided direct visual evidence for this hypothesis: using extremely high-resolution microscopy, researchers first measured the size and shape of 6,920 synapses and then showed that after a few hours of sleep about 80% of the synapses had shrunk by ca. 18%. 46

    Large synapses in an awake mouse vs. shrunken synapses of a mouse who indulged in some sleep (Image Credit: med.wisc.edu)
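
    A quick back-of-envelope check on those numbers: if roughly 80% of synapses shrink by about 18% while the remaining 20% hold steady, the average synapse ends the night around 0.8 × 18% ≈ 14% smaller — a substantial overnight reset of connection strength.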

    Of course there is no one correct answer — the truth is somewhere in the middle, with each of these theories explaining some part of what is really going on. But now you know that you should think twice before pulling an all-nighter before an exam — stressing out with Red Bull won’t make you remember stuff better, but some hours of restful ZZZ’s just might.


    Another theorized function of sleep is doing a little housekeeping. While you are sleeping, the brain puts on the janitor robe and sets off to clear out all the junk that has accumulated during all your daytime thinking. In a series of mouse studies, researchers discovered a system that drains waste products from the brain during sleep. 47 A brain equivalent of the lymphatic system — a network of tiny channels flushing out waste by-products with cerebrospinal fluid — is responsible for this. Scientists called it the “glymphatic system” because, well, it functions like a lymphatic system but with the help of supportive brain cells: glial cells. See what they did there? When mice were asleep, the system went into overdrive (the awake flow was just 5% of the sleep flow!) and the brain cells even shrank in size to make the spaces around them easier to clean. The by-products that get flushed out include beta-amyloid protein, the criminal mind behind Alzheimer’s disease (it gets cleared out twice as fast during sleep as compared to wakefulness!), and other substances associated with neurodegenerative disorders. So if you wanna pull an all-nighter, think about all these toxins accumulating in your brain and hit the hay for a couple of hours instead.

    So while it is still not completely clear why we spend a third of our lives sleeping we seem to have pretty good pointers.

    Stay tuned for part II, which will include fascinating info on dreams, what can go wrong with our sleep and some advice on optimal sleeping practice!

    in Over the brainbow on March 12, 2018 06:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    We probably won’t hear from aliens. But by the time we do, they’ll be dead.

    Astronomers build on the Drake Equation to probe the chance that humans will find existing aliens. The answer: Not likely.

    in Science News on March 12, 2018 04:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Misconduct investigation reports are uneven at best. Here’s how to make them better.

    Retraction Watch readers may have noticed that over the past year or so, we have been making an effort to obtain and publish reports about institutional investigations into misconduct. That’s led to posts such as one about a case at the University of Colorado, Denver, one about the case of Frank Sauer, formerly of the … Continue reading Misconduct investigation reports are uneven at best. Here’s how to make them better.

    in Retraction watch on March 12, 2018 03:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Invertebrates: A Vastly Different Brain Structure Can Be Remarkably Efficient

    Have you ever wondered how an alien’s brain might work? What kind of information-processing system it might have, and how it might differ from ours? There is no need to look far. The answers to these questions can be found much closer to home, in insects and other invertebrates.

    Insects and humans have evolved quite differently, and thus they have very different kinds of neural systems. Insects, jellyfish, octopuses, and many other invertebrates have very sophisticated nervous systems: they display remarkably complex behavior, learning abilities, and intelligence. Yet many of them lack the kind of brain we have. That is, they lack a centralized decision-making system.

    If we chart the evolution of the nervous system, we can see that earlier neural networks were more diffuse. Collections of neurons called ganglia make most decisions locally (invertebrates have several ganglia), while some central decisions, like the direction of movement of the whole body, are made more democratically. In fact, it is still not fully known when the centralization of the nervous system started to take place.

    It is well accepted that cnidarians have one of the most primitive nervous systems — the starting point from which the neural capabilities of other living beings evolved. Organisms of this evolutionary branch, like jellyfish, are still common today, and we now know that their nervous system is quite complex and more intellectually capable than we had thought.

    A decentralized or diffuse nervous system has some fantastic capabilities. Some insects, like Drosophila flies, can stay alive for many days after being decapitated. They not only survive without a head; they can fly, walk, and even copulate. Cockroaches can still remember things even after their brain is removed.

    Although diffuse nervous systems are more primitive, this does not imply a lack of intelligence. An example of a decentralized, large, and at the same time incredibly complex nervous system is found in the octopus. Octopuses have the majority of their neurons, organized in ganglia, located in their arms. The arms of an octopus can do lots of things independently of each other; they can perform basic motions and can touch or taste without any involvement of the brain. Although octopuses have nothing in common with vertebrates when it comes to neuroanatomy, they can learn, recognize objects, and perform complex tasks.

    We have erroneously come to see brain size as a measure of intelligence, since most representatives of the animal kingdom, particularly non-mammals, have far smaller brains than ours. But the relationship between brain size and intelligence is not linear. Just think of whales, whose brains can weigh 9 kg and contain 200 billion neurons. A typical human brain, for comparison, weighs around 1.5 kg and has 80 billion neurons. This shows that not all kinds of intellectual activity depend on brain size or the number of neurons.
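
    Running the article’s own numbers makes the point: the whale brain works out to roughly 200/9 ≈ 22 billion neurons per kilogram, while the human brain packs 80/1.5 ≈ 53 billion per kilogram — more than double the density — yet neither raw size nor density maps neatly onto any intuitive ranking of intelligence.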

    This lack of a direct correlation explains why some insects are more innovative than us or any other higher animals when it comes to socializing, forming colonies, or even learning from each other. Colonies of bees and ants have very complex social structures, where various members have clearly divided tasks. They have a complicated system of communicating with each other. They even have a clear division of labor, which includes castes of slaves, farmers, and warriors. What is amazing is that such complex activities are achieved with the help of just a few million neurons.

    Now we understand that the nervous system doesn’t have to be centralized to function efficiently, and that different forms of nervous systems have their pros and cons. Further, we know that neither brain volume nor the number of neurons is an indicator of intelligence. However, we have kept assuming that, at the very least, handling larger volumes of information requires a larger number of neurons. But even this view is under revision now.

    Many complex behavioral reactions and responses can rely on just a few neurons and be independent of the brain. Think of reflexes such as the pain response, which is a function of ganglia and not the brain. We are also learning to appreciate the value of our gut feeling, as the gut contains an enormous number of neurons that play a much broader role in health.

    Understanding the existence of entirely different kinds of cognitive systems, neural systems, or information processing systems has many implications for human health. It forces us to look at our bodies from a different angle. It is entirely possible that a certain degree of non-centralized intelligence, and maybe even non-neural information processing, exists in our body.

    An excellent example of diffuseness of body systems is our endocrine system. The concept that endocrine functions are limited to some specific organs is becoming obsolete. We are now talking about diffuse endocrinology, as every organ and tissue secretes some endocrine hormones with various effects, be it the gut or fat. Similarly, many researchers are now talking about diffuse neuroendocrinology, as the two systems are very well connected.



    in Brain Blogger on March 12, 2018 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How biology breaks the ‘cerebral mystique’

    The Biological Mind rejects the idea of the brain as the lone organ that makes us who we are. Our body and environment also factor in, Alan Jasanoff says.

    in Science News on March 12, 2018 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What does this part of brain do?

    Sarah Genon writes about how a change in perspective, and large data-banks, could help us to understand the brain’s functions.

    As you start reading these few lines you are engaging in a wide range of mental tasks: reading the title sentence, evaluating the option of reading further, maybe visually scanning the page to get an idea of its length and attractiveness before deciding to continue. Or perhaps you are thinking it is time for another coffee.

    Just by wondering to yourself “what actually am I supposed to do today?” or “do I actually have time to read this article?“, your mind and brain create a cascade of thoughts and mental functions.

    As neuroscientists, we wonder how this rich repertoire of mental functions is organized at the brain level. This remains a largely open question, and the main reason is actually that this is not the right question to ask.

    Our recently published article in the journal Trends in Cognitive Sciences (https://doi.org/10.1016/j.tics.2018.01.010) described how different behavioural functions have been assigned to brain regions. In line with previous authors (Poldrack, 2011), we discuss why a shift in viewpoint is needed and propose some concrete research perspectives to support this view. Here I summarise this discussion.

    Generally, asking the question “what does this part of the brain do?” calls to mind functions that have been formulated from the study of behaviour. Humans have always tried to understand their own mind. We have pursued this aim through different behavioural sciences, mainly under the umbrella of psychology but also in related disciplines, such as philosophy, sociology, psychiatry, neurology and behavioural economics. In all these fields, researchers and clinicians have developed theories and models explaining behavioural processes, functions, and their interactions.

    This investigation of the human mind across centuries by so many different disciplines has resulted in numerous concepts about the mind, such as “phonological lexicon”, “recollection” or “theory of mind”. These concepts and their related models and theories have significantly advanced our understanding of the human mind and behaviour and, more importantly, have contributed to our understanding and treatment of dysfunctions. As all aspects of behaviour originate from one unique substrate, the brain, the question naturally arises: how do all these concepts relate to the brain?

    After a century of neuroanatomy of the human brain, it is well acknowledged that the brain is spatially organized into regions and networks. We might therefore intuitively expect that all the concepts from the behavioural sciences could be assigned to specific brain regions and networks. Accordingly, several approaches have been used over recent decades to relate these concepts to the brain. To understand the outcomes of this multidisciplinary endeavour, one could begin by looking in the scientific literature at what researchers have written about the hippocampus. Many different concepts can be found, such as autobiographical memory (1), explicit memory (2), contextual memory (3), associative memory (4), incremental learning (5), recollection (6), encoding (7), retention (8), consolidation (9), novelty detection (10), spatial navigation (11), scene imagination (12), creative thinking and flexible cognition (13)… This looks like conceptual chaos.

    However, it is certainly true that the hippocampus plays a role in autobiographical memory, scene imagination and spatial navigation. The point is that all those concepts from the study of behaviour can be related to the hippocampus, but this is not what the hippocampus does… Let’s look at the question from the brain’s perspective. The brain and its many regions and networks do not “name objects”, “feel empathy” or “imagine a scene”. This is what humans do, but this is not what the brain does.

    The current state of affairs in the study of brain structure-function relationships could be better understood by a simple metaphor. If several communities of researchers were studying the many final output functions that are performed by computers (such as a reminder for you to send an e-mail to your colleague for his birthday) while other communities of researchers were partitioning the hardware of the computers, then what would happen when those two groups of scientific communities tried to map their respective models and components to evidence produced by the other group? Obviously, a key would be missing to bridge the observed output to the hardware architecture: the list of basic functions underlying the many tasks that the computer can perform.

    But how can we uncover this list of basic functions? That is, how can we find out the true functions of brain areas and networks? There is no straightforward approach to this issue. One way to make progress would be first to collect the extended pattern of behavioural functions with which each region and network is associated, in order to progressively develop new hypotheses “from the brain’s point of view”.

    Over recent decades, tremendous efforts have been made to collect the results of neuroimaging activation studies in databases, in such a way that it is now possible to identify, for any part of the brain, the hundreds of neuroimaging studies manipulating mental tasks that have reported activation there. For example, if we look at the hippocampus, we find that it has been activated during memory retrieval and spatial navigation, but also in relation to emotions and perceptual tasks.

    Brain scans and psychometric data (such as personality traits, cognitive skills and behavioural habits) have also been acquired in big population samples across Europe and the US and are another resource for addressing this issue. From these big population datasets, it is possible to identify significant relationships between parts of the brain and psychometric data. That is, we can examine correlations between the grey matter volume of a brain region and a range of behavioural measures tapping into everyday functioning, such as cognitive flexibility, anxiety or spatial memory.
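
    As a concrete (and purely hypothetical) illustration of this second approach, the sketch below correlates a region’s grey matter volume with a few behavioural measures across a large sample. The variable names and data are invented; in real projects these would come from cohort datasets such as those mentioned above:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 1000  # subjects

        # Invented measurements: hippocampal grey matter volume plus psychometrics.
        volume = rng.normal(3500.0, 300.0, n)  # mm^3, made-up scale
        measures = {
            "spatial_memory":        rng.normal(100.0, 15.0, n),
            "cognitive_flexibility": rng.normal(100.0, 15.0, n),
            "anxiety":               rng.normal(50.0, 10.0, n),
        }

        # Behavioural profile of the region: correlate volume with every measure.
        for name, scores in measures.items():
            r, p = stats.pearsonr(volume, scores)
            print(f"{name:>22}: r = {r:+.3f}, p = {p:.3f}")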

    We could then combine the behavioural profiles of any brain region revealed by the two types of approach (one based on aggregation of activation data, the other based on big datasets of cerebral and behavioural measures). From this hybrid profiling of brain regions and networks, we could start developing new hypotheses about the basic operations that any part of the brain computes.

    Then, to uncover any of the “hidden” functions computed in the brain, we have to rely on the unequalled ability of our brains to derive meaning from specific patterns. Rather than having each scientist look at the association between one concept and the brain, we would have many scientists seeing the same whole picture of many concepts associated with a specific brain region. We can hope that many scientists’ brains, looking from the same (brain) viewpoint at a colourful pattern of associations at the behavioural level, could come up with new hypotheses. Of course, the scientific path is still long and winding until we clearly understand the function of any brain region, but harnessing data aggregation for “community-based discovery science” can be the first step.

    Sarah Genon is a researcher at the Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7) in the Research Centre Jülich (Germany). She is engaged in the HBP Subproject 2: Human Brain Organisation.


    1 Bonnici, H. M., Chadwick, M. J. & Maguire, E. A. Representations of recent and remote autobiographical memories in hippocampal subfields. Hippocampus 23, 849–854, doi:10.1002/hipo.22155 (2013).

    2 Eichenbaum, H. Hippocampus: Cognitive processes and neural representations that underlie declarative memory. Neuron 44, 109–120 (2004).

    3 Maren, S. & Holt, W. The hippocampus and contextual memory retrieval in Pavlovian conditioning. Behav Brain Res 110, 97–108 (2000).

    4 Stella, F. & Treves, A. Associative memory storage and retrieval: involvement of theta oscillations in hippocampal information processing. Neural plasticity 2011, 683961, doi:10.1155/2011/683961 (2011).

    5 Meeter, M., Myers, C. E. & Gluck, M. A. Integrating incremental learning and episodic memory models of the hippocampal region. Psychological review 112, 560–585, doi:10.1037/0033-295X.112.3.560 (2005).

    6 Montaldi, D. & Mayes, A. R. The role of recollection and familiarity in the functional differentiation of the medial temporal lobes. Hippocampus 20, 1291–1314, doi:10.1002/hipo.20853 (2010).

    7 Rebola, N., Carta, M. & Mulle, C. Operation and plasticity of hippocampal CA3 circuits: implications for memory encoding. Nature reviews. Neuroscience 18, 208–220, doi:10.1038/nrn.2017.10 (2017).

    8 Moscovitch, M., Nadel, L., Winocur, G., Gilboa, A. & Rosenbaum, R. S. The cognitive neuroscience of remote episodic, semantic and spatial memory. Current opinion in neurobiology 16, 179–190, doi:10.1016/j.conb.2006.03.013 (2006).

    9 Kitamura, T. et al. Engrams and circuits crucial for systems consolidation of a memory. Science 356, 73–78, doi:10.1126/science.aam6808 (2017).

    10 Kumaran, D. & Maguire, E. A. Which computational mechanisms operate in the hippocampus during novelty detection? Hippocampus 17, 735–748, doi:10.1002/hipo.20326 (2007).

    11 Chersi, F. & Burgess, N. The Cognitive Architecture of Spatial Navigation: Hippocampal and Striatal Contributions. Neuron 88, 64–77, doi:10.1016/j.neuron.2015.09.021 (2015).

    12 Hassabis, D., Kumaran, D., Vann, S. D. & Maguire, E. A. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences 104, 1726–1731 (2007).

    13 Duff, M. C., Kurczek, J., Rubin, R., Cohen, N. J. & Tranel, D. Hippocampal amnesia disrupts creative thinking. Hippocampus 23, 1143–1149, doi:10.1002/hipo.22208 (2013).

    Poldrack, R. A. Inferring mental states from neuroimaging data: from reverse inference to large-scale decoding. Neuron 72, 692–697 (2011).


    in Brain Byte - The HBP blog on March 12, 2018 10:02 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Children with higher working memory are more inclined to finger count (and less able kids should be encouraged to do the same)

    By Christian Jarrett

    Finger counting by young kids has traditionally been frowned upon because it’s seen as babyish and a deterrent to using mental calculations. However, a new Swiss study in the Journal of Cognitive Psychology has found that six-year-olds who finger counted performed better at simple addition, especially if they used an efficient finger counting strategy. What’s more, it was the children with higher working memory ability – who you would expect to have less need for using their fingers – who were more inclined to finger count, and to do so in an efficient way. “Our study advocates for the promotion of finger use in arithmetic tasks during the first years of schooling,” said the researchers Justine Dupont-Boime and Catherine Thevenot at the Universities of Geneva and Lausanne.

    The 84 child volunteers were recruited from six different Swiss schools where the policy is not to teach finger counting explicitly, but not to discourage it either (except for very simple additions where the sum is less than 10).

    The researchers tested the children’s working memory using the backward digit span task, which involves hearing a string of numbers and repeating them back in reverse order. Children with higher working memory can accurately repeat back longer strings.

    The researchers also videoed the children discreetly while they performed, one child at a time, simple single-digit additions, some a bit trickier than others because they involved sums larger than 10 (some kids did the addition task before the memory tests, others afterwards). The researchers later coded the videos to see which kids counted on their fingers during the addition task, and which strategy they used.

    Fifty-two of the children finger counted, and there was a significant correlation between finger counting and better performance (for the easier and harder sums), and also between finger counting and higher working memory ability. The researchers think kids with poorer working memory struggle to discover finger counting for themselves, even though it would be advantageous if they used the right strategy.

    A problem for those kids with lower working memory ability who did finger count is that they tended to use a more laborious strategy that involves counting out both addends (i.e. numbers to be added) on their fingers, whereas the children with higher working memory ability favoured an efficient strategy that only involved using the fingers to count on from the first addend – for example, for 8+3, the child would only use three fingers to count on from eight. When the kids with lower working memory used the laborious finger strategy, they actually performed worse than if they used no fingers, especially for the harder sums. However, if they used the superior strategy, they did better at addition than those who didn’t use their fingers.
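
    The gap between the two strategies is easy to make concrete. Here is a toy sketch (not from the study) comparing them by the number of counting steps each requires:

        def count_all_steps(a: int, b: int) -> int:
            """Laborious strategy: count out both addends on the fingers."""
            return a + b  # raise a fingers one by one, then b more

        def count_on_steps(a: int, b: int) -> int:
            """Efficient strategy: start at the first addend and count on."""
            return b  # only the second addend is counted on the fingers

        for a, b in [(8, 3), (4, 7), (9, 6)]:
            print(f"{a}+{b}: count-all takes {count_all_steps(a, b)} steps, "
                  f"count-on takes {count_on_steps(a, b)}")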

    “Explicitly teaching lower achievers to use the [more efficient finger counting] strategy could be very beneficial for them,” the researchers said, adding that “… repeatedly using fingers to solve arithmetic problems should allow children to progressively abandon this strategy for more mental procedures and, thus, allow children to become more and more performant through practice.”

    The new findings build upon a previous study that tested five-year-olds’ addition skills repeatedly over a three-year period and which found that finger counting correlated with superior performance up to, but not beyond, age 8.

    High working memory capacity favours the use of finger counting in six-year-old children

    Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on March 12, 2018 09:25 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Depression among new mothers is finally getting some attention

    Scientists search new mothers’ minds for clues to postpartum depression.

    in Science News on March 11, 2018 09:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Superconductors may shed light on the black hole information paradox

    Materials that conduct electricity without resistance might mimic black hole physics.

    in Science News on March 09, 2018 09:12 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What we do and don’t know about how to prevent gun violence

    Background checks work to prevent gun violence; concealed carry and stand-your-ground laws don’t. But lack of data makes it hard to make other links.

    in Science News on March 09, 2018 08:52 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What Does Any Part of the Brain Do?

    How can we know the function of a region of the brain? Have we been approaching the problem in the wrong way? An interesting new paper from German neuroscientists Sarah Genon and colleagues explores these questions. According to Genon et al., neuroscientists have generally approached the brain from the standpoint of behavior. We ask: what is the neural basis of this behavioral or psychological function? Traditionally, assigning functions to brain regions has mainly been based on conc

    in Discovery magazine - Neuroskeptic on March 09, 2018 08:43 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Museum mummies sport world’s oldest tattoo drawings

    A wild bull and symbolic designs were imprinted on the bodies of two Egyptians at least 5,000 years ago.

    in Science News on March 09, 2018 05:23 PM.