As he prepared his lectures on the ‘future of man’, Medawar speculated that biological ‘fitness’ was in fact best understood as an economic phenomenon:
[I]t is, in effect, a system of pricing the endowment of organisms in the currency of offspring: ie, in terms of net reproductive performance.
Making such a connection – between the hidden hand of nature and the apparently impartial decisions of the market – was a hot way to read Popper. His greatest fans outside the scientific community were, in fact, economists. At the London School of Economics, Popper was close to the neoliberal theorist Friedrich Hayek. He also taught the soon-to-be billionaire George Soros, who named his Open Society Foundations (formerly, the Open Society Institute) after Popper’s most famous book. Along with Hayek and several others, Popper founded the Mont Pelerin Society, promoting marketisation and privatisation around the world.
Popper’s appointment to a fellowship at the Royal Society marked the demise of a powerful strand of socialist leadership in British science that had begun in the 1930s with the cadre of talented and public-facing researchers (J D Bernal, J B S Haldane and others) whom the historian Gary Werskey in 1978 dubbed ‘the visible college’. Indeed, Popper had encountered many of them during his prewar visits to the Theoretical Biology Club. While they were sharpening their complex science against the edge of Popper’s philosophy, he might well have been whetting his anti-Marxist inclinations against their socialised vision of science – even, perhaps, their personalities. What Popper did in The Open Society was take the biologists’ politicising of science and attach it to antifascism. Science and politics were connected, but not in the way that the socialists claimed. Rather, science was a special example of the general liberal virtues that can be cultivated only in the absence of tyranny.
After the war, the commitment of visible-college scientists to nation-building saw them involved in many areas of governmental, educational and public life. The Popperians hated them. In The Road to Serfdom (1944), Hayek warned that they were ‘totalitarians in our midst’, plotting to create a Marxist regime. They should leave well alone, he argued, and accept that their lab work bore no connection to social questions. Hayek’s bracketing off of governance was no more plausible in science than it was in economics. The greatest myth of neoliberalism is that it represents a neutral political perspective – a commitment to non-meddling – when in fact it must be sustained through aggressive pro-business propaganda and the suppression of organised labour. So, while Soros’s social activism has done much good in the world, it has been funded through economic activity that depends upon a systematic repression of debate and of human beings for its success. Having a philosophical cover-story for this kind of neoliberalism, that likens it to (Popperian) science, does it no harm at all.
Science is profoundly altered when considered analogous to the open market. The notion that scientific theories vie with one another in open competition overlooks the fact that research ambitions and funding choices are shaped by both big-P and small-p politics. There is a reason why more scientific progress has been made in drugs for the treatment of diseases of wealth than of poverty. Moreover, career success in science – which shapes future research agendas when a person becomes a leader in their field – is a matter profoundly inflected by gender, race, class and dis/ability.
Many people want to translate their political and ideological interests into economic terms. They want somehow to design an economic system that makes, for example, Mars missions so expensive that we don’t do them, and saving lives so cheap that we never fail to save them. But this is fundamentally wrong-headed.
If you can’t come to terms with the diversity and variety of things humans want and value, and are willing to work for, you will want to design economic models to coerce them to act differently. This is a version of what statisticians call the bias-variance tradeoff, which I’m using as a metaphor here. The more you try to bias an economic system to do certain things, the more you’ll narrow the overall range of things it can do.
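For readers unfamiliar with the metaphor, here is a minimal, illustrative sketch of the statistical bias-variance tradeoff itself; it makes no claim about economic systems. A heavily constrained model and a very flexible one are each fit to many fresh noisy samples of the same target, and their test error is decomposed into bias and variance. The sine target, polynomial degrees 0 and 9, and noise level are all my own illustrative assumptions.

```python
# A minimal sketch of the bias-variance tradeoff: compare a constrained
# (high-bias, low-variance) fit with a flexible (low-bias, high-variance)
# fit across many resampled training sets.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)        # the underlying relationship
x_test = np.linspace(0, 1, 50)
n_trials, n_train, noise = 200, 20, 0.3

def fit_predict(degree):
    """Fit a polynomial of the given degree to fresh noisy data, many times."""
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        x = rng.uniform(0, 1, n_train)
        y = true_f(x) + rng.normal(0, noise, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    return preds

for degree in (0, 9):                            # constrained vs flexible model class
    preds = fit_predict(degree)
    bias2 = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree {degree}: bias^2 = {bias2:.3f}, variance = {variance:.3f}")
```

The constrained degree-0 fit shows high bias and low variance, the degree-9 fit the reverse; tightening or loosening the model class trades one for the other, which is the narrowing effect the paragraph above gestures at.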
Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
But errors can occur at each level of the hierarchy: differences between the prediction that a layer makes about the input it expects and the actual input. The bottommost layer adjusts its synaptic weights to minimize its error, based on the sensory information it receives. This adjustment results in an error between the newly updated lowest layer and the one above, so the higher layer has to readjust its synaptic weights to minimize its prediction error. These error signals ripple upward. The network goes back and forth, until each layer has minimized its prediction error.
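As a rough illustration of the scheme just described (not any particular published model), the sketch below builds a tiny two-level hierarchy in which predictions flow down, errors flow up, and the layers iteratively adjust their activity to shrink those errors. The layer sizes, random weights and learning rate are arbitrary assumptions chosen only for illustration.

```python
# A minimal predictive-coding sketch: each layer predicts the activity of
# the layer below, and activities are nudged to reduce the prediction errors.
# Weights are fixed random matrices here; a fuller model would learn them
# from the same error signals.
import numpy as np

rng = np.random.default_rng(1)
n_input, n_mid, n_top = 16, 8, 4

W0 = rng.normal(0, 0.3, (n_input, n_mid))   # middle layer -> predicted sensory input
W1 = rng.normal(0, 0.3, (n_mid, n_top))     # top layer -> predicted middle activity

x = rng.normal(0, 1, n_input)               # a fixed "sensory" input
r1 = np.zeros(n_mid)                        # inferred middle-layer activity
r2 = np.zeros(n_top)                        # inferred top-layer activity

lr = 0.05
for step in range(200):
    e0 = x - W0 @ r1                        # prediction error at the sensory layer
    e1 = r1 - W1 @ r2                       # prediction error at the middle layer
    r1 += lr * (W0.T @ e0 - e1)             # adjust middle activity to explain the input
    r2 += lr * (W1.T @ e1)                  # adjust top activity to explain the middle layer
    if step % 50 == 0:
        print(f"step {step:3d}: |e0| = {np.linalg.norm(e0):.3f}, |e1| = {np.linalg.norm(e1):.3f}")
```

The printed error norms shrink over the iterations, mirroring the back-and-forth settling the passage describes.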
In May 2010, a group of mathematicians gathered at a small research institute in Barbados where they spent sunny days discussing math just a few dozen feet from the beach. Even the lecture facilities — with no walls and simple wooden benches — left them as close to nature as possible.
“One evening when it was raining you couldn’t even hear people, because of the rain on the metal roof,” said Silverman.
The conference was a pivotal moment in the development of arithmetic dynamics. It brought together experts from number theory, like Silverman, and dynamical systems, like DeMarco and Krieger. Their goal was to expand the types of problems that could be addressed by combining the two perspectives.
One more signal - by Brian Arthur - of the emerging new economic paradigm better suited to the way life, politics and the economy really work.
The economics I will describe here drops the assumptions of equilibrium and rationality. But it did not come from an attempt to discard standard assumptions; rather, it came from a pathway of thinking about how the economy actually works.
By its definition, equilibrium makes no allowance for the creation of new products or new arrangements, for the formation of new institutions, for exploring new strategies, for events triggering novel events, indeed, for history itself. All these have had to be discarded from the theory. “The steady advance of equilibrium theory throughout the twentieth century,” says David Simpson, “remorselessly obliterated all ideas that did not fit conveniently into its set of assumptions.”
[R]ational behaviour is not well-defined. Therefore, there is no ‘optimal’ set of moves, no optimal behaviour. Faced with this — with fundamental uncertainty, ill-defined problems and undefined rationality — standard economics understandably comes to a halt. It is not obvious how to get further.
Abstract
Conventional, neoclassical economics assumes perfectly rational agents (firms, consumers, investors) who face well-defined problems and arrive at optimal behaviour consistent with — in equilibrium with — the overall outcome caused by this behaviour. This rational, equilibrium system produces an elegant economics, but is restrictive and often unrealistic. Complexity economics relaxes these assumptions. It assumes that agents differ, that they have imperfect information about other agents and must, therefore, try to make sense of the situation they face. Agents explore, react and constantly change their actions and strategies in response to the outcome they mutually create. The resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis. The economy becomes something not given and existing but constantly forming from a developing set of actions, strategies and beliefs — something not mechanistic, static, timeless and perfect but organic, always creating itself, alive and full of messy vitality.
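As a concrete, hedged illustration of this style of economics, here is a minimal sketch of Arthur's own El Farol bar problem: boundedly rational agents forecast attendance from a shared history, act on their forecasts, and the attendance they jointly produce feeds back into their next forecasts rather than settling into a classical equilibrium. The particular set of predictors and all parameters below are my own simplifications.

```python
# A minimal El Farol sketch: 100 agents decide weekly whether to visit a bar
# with capacity 60, each trusting whichever simple predictor has been most
# accurate for it so far. The outcome they create keeps reshaping their forecasts.
import numpy as np

rng = np.random.default_rng(2)
n_agents, capacity, weeks = 100, 60, 52

# Each predictor maps the attendance history to a forecast of next week's crowd.
predictors = [
    lambda h: h[-1],                         # same as last week
    lambda h: np.mean(h[-4:]),               # four-week average
    lambda h: 2 * h[-1] - h[-2],             # extrapolate the last trend
    lambda h: 100 - h[-1],                   # mirror of last week
]

scores = np.zeros((n_agents, len(predictors)))   # accumulated error per agent, per predictor
history = [60, 55, 70]                           # seed attendance history

for week in range(weeks):
    forecasts = np.array([p(history) for p in predictors])
    best = scores.argmin(axis=1)             # each agent trusts its best predictor so far
    go = forecasts[best] < capacity          # attend if the bar is forecast uncrowded
    attendance = int(go.sum())
    history.append(attendance)
    scores += np.abs(forecasts - attendance) # penalize every predictor by its error
    if week % 10 == 0:
        print(f"week {week:2d}: attendance = {attendance}")
```

Attendance typically hovers around the capacity of 60 without converging to it, an emergent, self-created pattern of the kind the abstract points to.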
A signal of a possible future of the digital environment if we let business models enclose the commons of adversarial interoperability.
Since its founding in the 1930s, Hewlett-Packard has been synonymous with innovation, and many's the engineer who had cause to praise its workhorse oscillators, minicomputers, servers, and PCs. But since the turn of this century, the company's changed its name to HP and its focus to sleazy ways to part unhappy printer owners from their money. Printer companies have long excelled at this dishonorable practice, but HP is truly an innovator, the industry-leading Darth Vader of sleaze, always ready to strong-arm you into a "deal" and then alter it later to tilt things even further to its advantage.
The company's just beat its own record, converting its "Free ink for life" plan into a "Pay us $0.99 every month for the rest of your life or your printer stops working" plan.
Printers are grifter magnets, and the whole industry has been fighting a cold war with its customers since the first clever entrepreneur got the idea of refilling a cartridge and settling for mere astronomical profits, thus undercutting the manufacturers' truly galactic margins. This prompted an arms race in which the printer manufacturers devote ever more ingenuity to locking third-party refills, chips, and cartridges out of printers, despite the fact that no customer has ever asked for this.
As spooky and exotic as quantum physics is, it continues to find applications in the ongoing progress of other sciences. This is one great signal.
[B]ecause the photons are entangled as a single 'non-local' particle, the phase shifts experienced by each photon individually are simultaneously shared by both.
"The process we've developed frees us from those limitations of classical coherence and ushers holography into the quantum realm. Using entangled photons offers new ways to create sharper, more richly detailed holograms, which open up new possibilities for practical applications of the technique.
A new type of quantum holography which uses entangled photons to overcome the limitations of conventional holographic approaches could lead to improved medical imaging and speed the advance of quantum information science.
A team of physicists from the University of Glasgow are the first in the world to find a way to use quantum-entangled photons to encode information in a hologram. The process behind their breakthrough is outlined in a paper published today in the journal Nature Physics.
Holography is familiar to many from its use as security images printed on credit cards and passports, but it has many other practical applications, including data storage, medical imaging and defence.
The Glasgow team's new quantum holography process also uses a beam of laser light split into two paths, but, unlike in classical holography, the beams are never reunited. Instead, the process harnesses the unique properties of quantum entanglement—a process Einstein famously called 'spooky action at a distance'—to gather the coherence information required to construct a hologram even though the beams are forever parted.
This year is the 20th birthday of the first sequencing of the human genome. And while a great deal of progress has been made - we are still in the toddler stage of our domestication of the complexity of DNA.
[I]t isn’t really finished; gaps remain in the template of more than 3 billion DNA letters, especially in stretches of repetitive DNA. Those are holes where the technology that built the reference doesn’t do a good job of reading every letter. Scientists know there is DNA there, just not how much or how the letters are arranged. And despite being a compilation of more than 60 people’s DNA, the reference doesn’t capture the full range of human genetic diversity.
In a 2019 study of 910 people of African descent, researchers discovered an additional 296.5 million DNA bases that aren’t in the current reference.
Efforts are under way to capture all human genetic diversity and catalog missing DNA
As the master blueprint for building humans turns 20, researchers are both celebrating the landmark achievement and looking for ways to bolster its shortcomings.
The Human Genome Project — which built the blueprint, called the human reference genome — has changed the way medical research is conducted, says Ting Wang, a geneticist at Washington University School of Medicine in St. Louis. “It’s highly, highly valuable.”
For instance, before the project, drugs were developed by serendipity, but having the master blueprint led to the development of therapies that could specifically target certain biological processes. As a result, more than 2,000 drugs aimed at specific human genes or proteins have been approved. The reference genome has also made it possible to untangle complicated networks involved in regulating gene activity (SN: 9/5/12) and learn more about how chemical modifications to DNA tweak that activity (SN: 2/18/15). It has also led to the discovery of thousands of genes that don’t make proteins, but instead make many different useful RNAs (SN: 4/7/19). Researchers lay out those accomplishments and others February 10 in Nature.
“That said, the human reference genome we use has certain limitations,” Wang says.
It’s a beta world - and the Internet is still a toddler. This is a quantum signal of a grade school Internet.
Experiment connects three devices with entangled photons, demonstrating a key technique that could enable a future quantum internet.
Physicists have taken a major step towards a future quantum version of the Internet by linking three quantum devices in a network. A quantum internet would enable ultrasecure communications and unlock scientific applications such as new types of sensor for gravitational waves, and telescopes with unprecedented resolution. The results were reported on 8 February on the arXiv preprint repository.
“It’s a big step forward,” says Rodney Van Meter, a quantum-network engineer at Keio University in Tokyo. Although the network doesn’t yet have the performance needed for practical applications, Van Meter adds, it demonstrates a key technique that will enable a quantum internet to connect nodes over long distances.
Quantum communications exploit phenomena that are unique to the quantum realm — such as the ability of elementary particles or atoms to exist in a ‘superposition’ of multiple simultaneous states, or to share an ‘entangled’ state with other particles. Researchers had demonstrated the principles of a three-node quantum network before, but the latest approach could more readily lead to practical applications.
I’ve included articles related to the likely shift in magnetic poles - at some undetermined time in this millennium … or the next. This is an interesting signal (with two nice 1.5-minute videos) linking the past to ….
The Earth’s magnetic field has weakened by about 9% over the past 170 years, and the researchers say another flip could be on the cards. Such a situation could have a dramatic effect, not least by devastating electricity grids and satellite networks.
Event 42,000 years ago combined with fall in solar activity potentially cataclysmic, researchers say
The flipping of the Earth’s magnetic poles together with a drop in solar activity 42,000 years ago could have generated an apocalyptic environment that may have played a role in major events ranging from the extinction of megafauna to the end of the Neanderthals, researchers say.
The Earth’s magnetic field acts as a protective shield against damaging cosmic radiation, but when the poles switch, as has occurred many times in the past, the protective shield weakens dramatically and leaves the planet exposed to high energy particles.
One temporary flip of the poles, known as the Laschamps excursion, happened 42,000 years ago and lasted for about 1,000 years. Previous work found little evidence that the event had a profound impact on the planet, possibly because the focus had not been on the period during which the poles were actually shifting, researchers say.
Now scientists say the flip, together with a period of low solar activity, could have been behind a vast array of climatic and environmental phenomena with dramatic ramifications. “It probably would have seemed like the end of days,” said Prof Chris Turney of the University of New South Wales and co-author of the study.
While there is no doubt that humans are contributing to climate change - the future may have more uncertainty than our models can reveal. We must always be prepared to expect the unexpected and unpredicted. This is a weak signal of an unexpected future.
Antarctic iceberg melt could hold the key to the activation of a series of mechanisms that cause the Earth to suffer prolonged periods of global cooling, according to Francisco J. Jiménez-Espejo, a researcher at the Andalusian Earth Sciences Institute (CSIC-UGR), whose discoveries were recently published in Nature.
It has long been known that changes in the Earth's orbit as it moves around the sun trigger the beginning or end of glacial periods by affecting the amount of solar radiation that reaches the planet's surface. However, until now, the question of how small variations in the solar energy that reaches Earth can lead to such dramatic shifts in the planet's climate has remained a mystery.
In this new study, a multinational group of researchers proposes that when the Earth's orbit around the sun is just right, the Antarctic icebergs begin to melt further and further away from the continent, moving huge volumes of freshwater from the Antarctic Ocean into the Atlantic.
This process causes the Antarctic Ocean to become increasingly salty, while the Atlantic Ocean becomes fresher, affecting overall ocean circulation patterns, drawing CO2 from the atmosphere and reducing the so-called greenhouse effect. These are the initial stages that mark the beginning of an ice age on the planet.
And always we learn ever more complexity in the nature of … well, nature.
Aerosols’ overall role in climate sensitivity remains unclear; estimates in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report suggest a moderate cooling effect, but the error bars range from a net warming effect to a more significant cooling effect.
To climate scientists, clouds are powerful, pillowy paradoxes: They can simultaneously reflect away the sun’s heat but also trap it in the atmosphere; they can be products of warming temperatures but can also amplify their effects. Now, while studying the atmospheric chemistry that produces clouds, researchers have uncovered an unexpectedly potent natural process that seeds their growth. They further suggest that, as the Earth continues to warm from rising levels of greenhouse gases, this process could be a major new mechanism for accelerating the loss of sea ice at the poles — one that no global climate model currently incorporates.
This discovery emerged from studies of aerosols, the tiny particles suspended in air onto which water vapor condenses to form clouds. As described this month in a paper in Science, researchers have identified a powerful overlooked source of cloud-making aerosols in pristine, remote environments: iodine.
The full climate impact of this mechanism still needs to be assessed carefully, but tiny modifications in the behavior of aerosols, which are treated as an input in climate models, can have huge consequences, according to Andrew Gettelman, a senior scientist at the National Center for Atmospheric Research (NCAR) who helps run the organization’s climate models and who was not involved in the study. And one consequence “will definitely be to accelerate melting in the Arctic region,” said Jasper Kirkby, an experimental physicist at CERN who leads the Cosmics Leaving Outdoor Droplets (CLOUD) experiment and a coauthor of the new study.
Was it Freud or Jung who claimed that dreams are the royal road to the unconscious? Meditation enabled me to become very familiar with lucid dreaming - and certainly opened a window to previously non-conscious processing my mind engaged in. This may signal the development of new forms of technology - bringing to our conscious mind a vast processing capacity.
Four independent experiments across the globe have found that it's possible to establish two-way communications with people in the weird, hallucinatory state of lucid dreaming, opening up a new field of real-time "interactive dreaming" research.
This is a big deal for scientists trying to work out what the heck is going on as we sleep, because typically they've had to rely on the fragmented, fading scraps of memory people have once they've woken up. "Our experimental goal is akin to finding a way to talk with an astronaut who is on another world," reads the introduction of a combined study between four separate groups in France, Germany, the Netherlands and the USA.
Each group set out to test its own techniques on how to "interview" people without waking them up, using the bizarre phenomenon of lucid dreaming as a doorway into the dream world. During regular dreams, we typically have no idea that we're dreaming, simply accepting the strange situations we're placed in without critical judgement. Lucid dreaming, a "notoriously rare phenomenon," is a state where the sleeper is aware that they're dreaming, and sometimes capable of steering their experience.
The researchers took one group of experienced lucid dreamers, another of regular folk that they had trained in the art of lucid dreaming, and one patient with narcolepsy who frequently drifted in and out of lucid dream states – and found they were able to have two-way exchanges with members of all three groups.
And another fascinating signal related to awakening of another sort.
Two out of three people who received noninvasive ultrasound appear to have gained some level of consciousness, according to preliminary trial results.
Most people with what doctors call disorders of consciousness—which include vegetative states and the less-severe minimally conscious state, both marked by a lack of wakefulness and awareness—awaken within months. But for a small subset of people who suffer a severe brain injury and then remain in a coma for a year or more, “if they don’t recover spontaneously—they don’t enter the trajectory of recovery—there isn’t much that we can do for them once they remain stably in a disorder of consciousness,” says Martin Monti, a neuroscientist at the University of California, Los Angeles.
Monti is trying to change that. In a study published online last month in Brain Stimulation, Monti and his colleagues report preliminary results from a trial in which they used ultrasound to noninvasively stimulate an area known as the thalamus in patients with long-term disorders of consciousness. Of the three patients included in the write-up, two showed behavioral improvements, such as responding to simple commands and, for one, gaining the ability to motion yes or no in answer to questions.
To live is to anticipate -
being -
anticipating-becoming -
evolution -
context-afjord-dancing -
constrained by gradient-con-texts -
in con-text intensities -
in-formation fields -
fractaling margin-meme-brains
mhm -
A thing can’t be commodified -
without a complex -
enabling infrastructure ecology -
- of collective-tacit-know-how -
- of collabooperation -
the power of informational-network-effects -
rent-seeking -
is the denial of original debt -
to enabling infrastructure commons -
The technologies of -
fidelity-copy-ing -
are easy -
compared to -
technologies of fidelity of -
enablemeants -
because meants -
are always -
in-formation -
after-the-fact -