5G is already operative in a number of markets around the world. Within the next 10 years, the implementation will be complete. Then, we will see rapid advances in the digital revolution: reliable face recognition, augmented reality, elegant forms of machine learning, and the remaking of the practice of medicine enabled by nanotechnology. These phenomena will not merely change our culture—they will change the nature of reality itself. No aspect of human life will be untouched: the labor market will be disrupted, manufacturing will be disrupted, the economy will be disrupted, the government will be disrupted, education will be disrupted, the law will be disrupted, sex will be disrupted, the family will be disrupted, nature will be disrupted. As the digital frontier is expanded and settled, everything will be disrupted. A radical change to the very fabric of society is coming, and it is being deliberately pursued, with almost no deliberation as to its dangers or its costs.
There will be benefits. Due to technological advances in production and the monetary and material surpluses that will be produced, chances are that (globally speaking) people will be healthier with greater material well-being. But these innovations in digital technology are not being driven by a benevolent desire to improve the quality of life on Earth; nor are they driven by a desire to protect individuals and consumers. Rather, the driver is the prospect of acquiring the truly unparalleled power that derives from an ever-increasing body of data that reaches down into the most minute dimensions of human existence. (That is to say nothing of the enormous prospects for financial gain.)
Protein structure prediction and design has progressed by leaps and bounds in the last decade, but most of those efforts were aimed at designing amino acid sequences that fold into a single thermodynamically stable, rock-solid structure. In contrast, “if you’re designing a protein and you want it to be metamorphic, you need to make sure that you make it relatively unstable, so that it will unfold and potentially switch on a relatively quick time frame,” Volkman explained.
Baker added that it’s hard enough to design an amino acid sequence that adopts a single low-energy state; computing one that can adopt two different low-energy states with roughly equal probability is even harder. But given how useful these bistable switches could be, “designing proteins with multiple low-energy states is going to be really key,” he said. “I think this is a very important frontier for protein science.”
The game is the process, not the finished product. Importantly, when we play and make art, the products we make, the things we do, are autotelic – they are ends in themselves; as Hannah Arendt wrote: ‘only where we are confronted with things which exist independently of all utilitarian and functional references … do we speak of works of art.’ In this way, play could be considered an anti-capitalistic activity.
Let’s assume that Kurzweil is broadly correct that, at some point in this century, an AI will develop that outstrips all past digital intelligences. If it’s true that automata can then be as funny, romantic, loving and sexy as the best of us, it could also be assumed that they’d be capable of piety, reverence and faith. When it’s possible to make not just a wind-up clock monk, but a computer that’s actually capable of prayer, how then will faith respond?
This, I contend, will be the central cultural conflict for religion in this century. As focused as we are on the old touchstones that configure ideological divisions between the orthodox and heterodox, the mainline and the fringe, conservatives and liberals, with arguments about abortion, birth control, gay rights and so on dominating our understanding of cultural rift, it can be easy to eternalise those sectarian conflicts as having always existed. They weren’t always central in the past and they won’t always be the primary divisions in the future. Such issues must be historically and socially contextualised, and as they arose in light of certain political issues in the relatively contemporary era, so too will technology alter the sorts of disagreements that will mark religious division in the future. Right now, liberal and conservative religious thinkers disagree on when life begins, on the role of women in the Church and the status of LGBTQ+ believers. By the end of the century, there could very well be debates and denunciations, exegeses and excommunications about whether or not an AI is allowed to join a Church, allowed to serve as clergy, allowed to marry a biological human.
it could equally be argued that, just as evolutionary thought reinvigorated non-fundamentalist Christian faith (as with the Catholic theologian and Jesuit priest Pierre Teilhard de Chardin or the process theology of the philosopher Alfred North Whitehead), so too could artificial intelligence provide for a coming spiritual fecundity.
On the frontlines of the Nazi assault in Europe, however, a handful of scientists dared to disagree. As they saw it, the way to ensure the integrity of science was to enrich and deepen its connection to the public, not to sever it. The Austrian physicist Erwin Schrödinger, winner of the 1933 Nobel Prize for his contributions to the mind-bending new theory of the atom, was embarking on a second career as a popular science writer. A scientist didn’t truly understand a concept, Schrödinger argued, until he could explain it to a non-expert. Schrödinger stressed not the autonomy of science but the way it depended on something beyond empiricism – a faith in the essential universality of human perception. And he insisted that scientific discoveries gained in meaning by being shared as widely as possible, thereby multiplying the subjective experience of ‘discovery’.
Schrödinger wasn’t alone in calling for greater public engagement for the sciences. Like-minded contemporaries in central Europe included the social scientist Otto Neurath, the philosopher and historian Edgar Zilsel, the bacteriologist Ludwik Fleck, and the literary scholar Walter Benjamin. What united these five figures was the experience of witnessing how the forces of political reaction co-opted science and technology. From where they stood, it sure didn’t look like science was intrinsically democratic when left to its own devices. In response, Schrödinger and his contemporaries converged on a novel and radical principle: the importance of allowing the public to help steer the course of scientific research. What might we learn from their counterintuitive answer to the anti-science movement of the previous century?
In a 1931 radio broadcast for children, Benjamin reminded his listeners that much of modern science had originally been built on the fortuitous observations of ordinary people. Throughout the long 19th century, scientific fields ranging from botany and epidemiology to seismology and meteorology had depended on members of the public to furnish observations of plants, disease symptoms, tremors, storms and more. This led to lively communication between scientists and laypeople, as well as to efforts to keep the sciences as jargon-free as possible. Medical experts eschewed Latinisms in favour of terms their patients used to describe their own experiences of illness; meteorologists formulated wind scales and cloud taxonomies on the basis of the lingoes of sailors and farmers; and geologists came up with terms for seismology that corresponded to the felt reports of earthquake survivors.
Fleck traced the origin of modern science to a shared, everyday sphere of human activities, writing:
Surely there had always existed thinking typical of the natural sciences. It was to be found among the artisans, the seamen, the barber-surgeons, the leather workers and saddlers, the gardeners and probably also among children playing. Wherever serious or playful work was done by many, where common or opposite interests met repeatedly, this uniquely democratic way of thinking was indispensable.
By ‘democratic’, Fleck meant a form of knowledge that both resulted from and served a free and open confrontation among different points of view. Scientific knowledge was robust precisely because it was not the product of a single mind, but rather ‘democratically constructed’ by a mass collective, free to contest and refine it. ‘Natural science is the art of shaping a democratic reality and being guided by it – thus being reshaped by it,’ he said.
This is a very important signal of the emergence of a new economic paradigm, one that includes Modern Monetary Theory. For anyone holding conventional beliefs about the causes of inflation, this is a must-read.
Neither do rapid growth in government debt, declining interest rates, or rapid increases in a central bank’s balance sheet
Monetarist theory, which came to dominate economic thinking in the 1980s and the decades that followed, holds that rapid money supply growth is the cause of inflation. The theory, however, fails an actual test of the available evidence. In our review of 47 countries, generally from 1960 forward, we found that more often than not high inflation does not follow rapid money supply growth, and in contrast to this, high inflation has occurred frequently when it has not been preceded by rapid money supply growth.
The purpose of this paper is to present these findings and solicit feedback on our data, methods, and conclusions.
To analyze the issue, we developed a database of 47 countries that together constitute 91 percent of global GDP and looked at each episode of rapid money supply growth to see if it was followed by high inflation. In the majority of cases, it was not. In fact, the opposite was true—a large percentage of the cases of high inflation were not preceded by high money supply growth. These 47 countries all rank within the top 70 largest economies as measured by GDP and include each of the top 20 countries. If a country was not included, it was because we could not get a complete enough set of historical data on that country.
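The episode test described above can be sketched in a few lines of code. This is a hedged illustration only: the thresholds, look-ahead window, and toy data below are my assumptions, not the paper's actual parameters or dataset.

```python
# Illustrative sketch of an "episode" test: flag years of rapid money
# supply growth and check whether high inflation follows within a window.
# All thresholds and data here are assumed for demonstration.

RAPID_M2_GROWTH = 0.10   # assumed: >10% annual money supply growth
HIGH_INFLATION = 0.05    # assumed: >5% annual CPI inflation
WINDOW = 3               # assumed: look 3 years ahead

def episodes_followed_by_inflation(m2_growth, inflation):
    """m2_growth, inflation: equal-length yearly series for one country.
    Returns (episodes followed by high inflation, total episodes)."""
    followed, total = 0, 0
    for year, growth in enumerate(m2_growth):
        if growth <= RAPID_M2_GROWTH:
            continue
        total += 1
        ahead = inflation[year + 1 : year + 1 + WINDOW]
        if any(cpi > HIGH_INFLATION for cpi in ahead):
            followed += 1
    return followed, total

# Toy data: two rapid-growth years, only one followed by high inflation.
m2 = [0.04, 0.12, 0.03, 0.02, 0.15, 0.02, 0.03]
cpi = [0.02, 0.02, 0.07, 0.02, 0.02, 0.03, 0.02]
print(episodes_followed_by_inflation(m2, cpi))  # (1, 2)
```

Running this per country over the full panel, and likewise in reverse (flagging high-inflation years and checking whether rapid money growth preceded them), would produce the two tallies the paper compares.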
CONCLUSION
Based on our examination of countries that together constitute 91 percent of world GDP, we suggest that high inflation has infrequently followed rapid money supply growth, and in contrast to this, high inflation has occurred often when it has not been preceded by rapid money supply growth. The U.S. economy may well experience some increase in inflation in the coming year, but if it does, it is likely it will be due to factors other than monetary policy.
The future of the digital environment can be a foundation for democracy, or not; it depends on whether we create the right institutions, public infrastructure and protections.
Building for the Public Stack starts with the many initiatives and technologies that are already out there. And there are many: technologies, collectives, programmes and initiatives. Working on the Public Stack can take many forms.
When you look at technology, you see only the outside. You see the screen of your tightly sealed phone or the open tabs in your browser. Those interfaces are there for you, the user: but there’s lots of complexity going on beneath the surface, meticulously crammed into the objects you depend on every day.
For companies, selling you a phone is not enough: they want to know what you do with that phone, to make more money. This agenda means your phone is the result of a ‘private stack’. All the layers work together to achieve the commercial aims of its producer.
Many applications are also the result of private stacks. Often, especially when the app is free to install, it will ask you to agree to a long list of terms and conditions – essentially a contract – which outlines how it may use the data it collects from you. Those uses are manifold, but they have one thing in common: they make money for the developer of the app at the expense of your privacy.
Almost all apps revolve around data mining and behaviour manipulation. We are bombarded on social media and search engines with clickbait, micro-targeting, advertisements and disinformation: the business models of many tech companies are based on spying on users and the reselling and exploiting of their personal data. In addition, the vast majority of online applications are owned by only a few large tech companies. Waag wants that to change.
They want us to strive for a Public Stack based on the idea that all these layers should be developed from public values.
This too is a long must-read about a more substantially democratic internet: one that has outgrown its wild-west phase and has developed better institutions for conversation. Imagine if technology were a servant of people rather than of business models.
Altering the internet's economic and digital infrastructure to promote free speech
This article proposes an entirely different approach—one that might seem counterintuitive but might actually provide for a workable plan that enables more free speech, while minimizing the impact of trolling, hateful speech, and large-scale disinformation efforts. As a bonus, it also might help the users of these platforms regain control of their privacy. And to top it all off, it could even provide an entirely new revenue stream for these platforms.
That approach: build protocols, not platforms.
To be clear, this is an approach that would bring us back to the way the internet used to be. The early internet involved many different protocols—instructions and standards that anyone could then use to build a compatible interface. Email used SMTP (Simple Mail Transfer Protocol). Chat was done over IRC (Internet Relay Chat). Usenet served as a distributed discussion system using NNTP (Network News Transfer Protocol). The World Wide Web itself was its own protocol: HyperText Transfer Protocol, or HTTP.
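The point of those protocols is that they are simply agreed-upon text formats that anyone can implement, with no gatekeeper. As a minimal sketch (not tied to any particular library), here is what "speaking" HTTP amounts to: building a request and parsing a response are just string manipulation against a published format.

```python
# A protocol is an open, agreed text format: any program that emits and
# parses these strings correctly can interoperate, with no central owner.
# Minimal HTTP/1.1 sketch; no network access needed.

def build_request(host, path="/"):
    """Construct a bare HTTP/1.1 GET request for the given host and path."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n")

def parse_status(response):
    """Split the status line, e.g. 'HTTP/1.1 200 OK', into its parts."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

canned = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi"
print(parse_status(canned))  # ('HTTP/1.1', 200, 'OK')
```

SMTP, IRC and NNTP work the same way: a published grammar of lines over a socket, which is exactly what lets competing clients and servers coexist.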
In the past few decades, however, rather than building new protocols, the internet has grown up around controlled platforms that are privately owned. These can function in ways that appear similar to the earlier protocols, but they are controlled by a single entity. This has happened for a variety of reasons. Obviously, a single entity controlling a platform can then profit off of it. In addition, having a single entity can often mean that new features, upgrades, bug fixes, and the like can be rolled out much more quickly, in ways that would increase the user base.
Indeed, some of the platforms today are leveraging existing open protocols but have built up walls around them, locking users in, rather than merely providing an interface.
This actually highlights that there is not an either/or choice here between platforms and protocols but rather a spectrum. However, the argument presented here is that we need to move much more to a world of open protocols, rather than platforms.
Hopefully this is a good signal of progress in protecting a fair, ‘good-enough’ digital environment and agora.
Big Tech got big through acquisitions—and this bill aims to prevent that in the future.
With a new session of Congress underway and a new administration in the White House, Big Tech is once again in lawmakers' crosshairs. Not only are major firms such as Apple, Amazon, Facebook, and Google under investigation for allegedly breaking existing antitrust law, but a newly proposed bill in the Senate would make it harder for these and other firms to become so troublingly large in the first place.
The bill (PDF), called the Competition and Antitrust Law Enforcement Reform Act (CALERA for short, which is still awkward), would be the largest overhaul of US antitrust regulation in at least 45 years if it became law.
Klobuchar's bill would shift the burden of proof in merger reviews onto businesses that already have a dominant market position. Those companies—which in tech would absolutely include firms such as Amazon, Google, and Facebook—would proactively have to demonstrate that a merger would not "create an appreciable risk of materially lessening competition," in addition to not creating a monopoly or monopsony.
A monopsony is effectively the same problem as a monopoly—excessively concentrated market power—but inverted. Instead of there being only one seller, a monopsony is a situation in which there may be many sellers but only one buyer.
This is definitely a good signal of the emergence of the Star Trek tricorder - sometime in the next …..
The Google Fit app will measure heart and respiratory rate
Google is adding heart and respiratory rate monitors to the Fit app on Pixel phones this month, and it plans to add them to other Android phones in the future. Both features rely on the smartphone camera: it measures respiratory rate by monitoring the rise and fall of a user’s chest, and heart rate by tracking color change as blood moves through the fingertip.
The features are only intended to let users track overall wellness and cannot evaluate or diagnose medical conditions, the company said.
To measure respiratory rate (the number of breaths someone takes per minute) using the app, users point the phone’s front-facing camera at their head and chest. To measure heart rate, they place their finger over the rear-facing camera.
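The heart-rate measurement described above rests on a simple signal-processing idea: blood flow makes the fingertip's average brightness pulse once per heartbeat, so counting pulses per minute in that signal gives the rate. The sketch below is an illustrative assumption about how such an estimate could work, not Google's actual implementation.

```python
# Hedged sketch: estimate heart rate from a per-frame brightness signal by
# counting upward crossings of the mean (one per heartbeat). The sampling
# rate and method are illustrative, not the Google Fit algorithm.
import math

def estimate_bpm(brightness, fps):
    """brightness: mean pixel value per frame; fps: camera frames/second."""
    mean = sum(brightness) / len(brightness)
    # Each heartbeat produces one upward crossing of the mean level.
    beats = sum(
        1 for a, b in zip(brightness, brightness[1:])
        if a < mean <= b
    )
    seconds = len(brightness) / fps
    return 60.0 * beats / seconds

# Synthetic 10-second signal at 30 fps with a 1.2 Hz (72 bpm) pulse.
fps = 30
signal = [math.sin(2 * math.pi * 1.2 * t / fps + 0.3) for t in range(fps * 10)]
print(round(estimate_bpm(signal, fps)))  # 72
```

A production system would need filtering to reject motion and lighting artifacts, but the core measurement is this kind of periodicity estimate.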
A fascinating article explaining a fundamentally new phase of matter and its apparent defiance of the law of conservation of energy.
They were supposed to be impossible
These special crystals were theorized in 2012 by the physicist and mathematician Frank Wilczek, who suggested that crystals’ structural repetition could exist in the fourth dimension, time, as well as in the three spatial ones. It wasn’t until 2016 that the first blueprint for actually making one surfaced.
Time crystals have been created several times. First by researchers at the University of Maryland, who used ytterbium atoms and entangled them in repeating patterns using a magnetic field. A second laser then moved the atoms, and they eventually exhibited a pattern different from the one created by the laser. Researchers at Harvard used nitrogen impurities in diamond and employed microwaves to cause the spins to flip and oscillate. Since then, time crystals have been made twice more: once with a solid material called monoammonium phosphate, and once with a liquid containing special star-shaped clusters of molecules.
Since their discovery, time crystals have been likened to a perpetual-motion machine — a machine that can continue to move and function without an energy source. But while they come close to the description, time crystals are different in that no energy can be extracted from them because they are already in their ground state. However, time crystals do seem to have a promising future.
This is a great signal of our growing understanding of how each individual is constituted by a personal microbial ecology that contributes to every dimension of our health.
Scientists are starting to work out how the gut microbiome can affect brain health. That might lead to better and easier treatments for brain diseases.
Today, however, the gut–brain axis is a feature at major neuroscience meetings, and Cryan says he is no longer “this crazy guy from Ireland”. Thousands of publications over the past decade have revealed that the trillions of bacteria in the gut could have profound effects on the brain, and might be tied to a whole host of disorders. Funders such as the US National Institutes of Health are investing millions of dollars in exploring the connection.
But along with that explosion of interest has come hype. Some gut–brain researchers claim or imply causal relationships when many studies show only correlations, and shaky ones at that, says Maureen O’Malley, a philosopher at the University of Sydney in Australia who studies the field of microbiome research. “Have you found an actual cause, or have you found just another effect?”
In recent years, however, the field has made significant strides, O’Malley says. Rather than talking about the microbiome as a whole, some research teams have begun drilling down to identify specific microbes, mapping out the complex and sometimes surprising pathways that connect them to the brain. “That is what allows causal attributions to be made,” she says. Studies in mice — and preliminary work in humans — suggest that microbes can trigger or alter the course of conditions such as Parkinson’s disease, autism spectrum disorder and more (see ‘Possible pathways to the brain’). Therapies aimed at tweaking the microbiome could help to prevent or treat these diseases, an idea that some researchers and companies are already testing in human clinical trials.
And not just our mental health but also how effective some of our medicines can be.
Gut microbiota have been linked to the success and failure of multiple cancer treatments, including chemotherapy and cancer immunotherapy with immune checkpoint inhibitors such as nivolumab and pembrolizumab. In the more recent studies, the species and relative populations of gut bacteria determined the probability that a cancer patient would respond to these drugs.
The effect of a drug, or impact of a treatment like chemotherapy, doesn’t just depend on your body. The success of a particular medicine also depends on the trillions of bacteria in your gut.
The 100 trillion bacteria that live within the human digestive tract – known as the human gut microbiome – help us extract nutrients from food, boost the immune response and modulate the effects of drugs. Recent research, including my own, has implicated the gut microbiome in seemingly unconnected states, ranging from the response to cancer treatments to obesity and a host of neurological diseases, including Alzheimer’s, Parkinson’s disease, depression, schizophrenia and autism.
What underlies these apparently discrete observations is the unifying idea that the gut microbiota send signals beyond the gut and that these signals have broad effects on a large swathe of target tissues.
So is it i don’t want to think -
when I mahjong-nesthestize -
or is it that -
i want to space-out into partial-attentioning -
visual pattern-hunting -
and
aural reason-gathering
fractal minding -
partial-attention syn-droning -
self-dividuating -
superpositioning -