Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st Century
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes:
Articles:
Here is a thought experiment. I give you a ride in a time machine. It has only one lever. You can choose to go forward in time, or backwards. All trips are one-way. Whenever you arrive, you arrive as a newborn baby. Where you land is random, and so are your parents. You might be born rich or poor, male or female, dark or light, healthy or sick, wanted or unwanted.
Your only choice is whether you choose to be thrust forward in time, spending your new life in some random future in some random place, or thrust into the past, in some random time and random place. I have not met anyone yet who would point the lever to the past. (If you would, leave a comment why.) Even if we constrained the time machine to jump mere decades away, everyone points it to the future. For while we can certainly select certain places, certain eras in the past that seem attractive, their attractiveness disappears if we arrive as a servant, a slave, an outcast ethnicity, or even as a farmer during a drought, or during never-ending raiding and wars.
The only argument I’ve heard for choosing the past is that the downsides are known; you have a randomized chance of being a slave, or the fourth wife, or a Roman miner, while the downsides of some future date are unknown and could possibly be worse. Perhaps there is no civilization at all in 500 years, and you therefore arrive in a toxic wasteland, or all humans are enslaved to robots. In this calculus the known horror is preferred to unknown horrors.
But since this is random placement, there is still a higher chance you’d get a bearable life in the future, even if you were at the bottom of that society, than you’d get from a random draw in the past. If we have any sense of what the past was really like, we intuitively know that today is much better than the past. This difference is probable (not guaranteed) to hold for a future date; it is highly likely no one born in 2070 would want to be born in 2020.
The denial of progress is directly linked to ignorance of the past. There are romantic notions of the past that are not based on evidence; some of these lovely visions are not untrue, but they are select, rare, privileged slivers that disregard the actual state of most humans for most times in most places, as any serious inquiry into global history will reveal. Today there is still a huge discrepancy between the well-off of the world and the great majority of humans in most places. But the point of the time machine thought experiment is that virtually everyone would rather be at the bottom today than at the bottom 200+ years ago.
Kevin Kelly - Progress and the Randomized Time Machine
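A toy way to make the “random placement” logic concrete is a small Monte Carlo sketch: draw random births from two assumed welfare distributions and compare the odds of a bearable life. The medians, the spread, and the threshold below are made-up illustrations, not data from the essay.

```python
# Toy Monte Carlo reading of the randomized time machine argument.
# All numbers are illustrative assumptions, not measurements.
import random

def random_birth(median: float, spread: float) -> float:
    # Welfare of one random birth, clustered around the era's median.
    return random.gauss(median, spread)

past   = [random_birth(20, 15) for _ in range(100_000)]
future = [random_birth(60, 15) for _ in range(100_000)]

BEARABLE = 40  # arbitrary threshold on a 0-100 welfare scale
print(f"bearable in the past:   {sum(w > BEARABLE for w in past) / len(past):.0%}")
print(f"bearable in the future: {sum(w > BEARABLE for w in future) / len(future):.0%}")
```

Under any such assumptions where the future’s median welfare is higher, a random draw from the future beats a random draw from the past, even for lives near the bottom of the distribution - which is the essay’s point.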
Imagine if the people of the Soviet Union had never heard of communism. The ideology that dominates our lives has, for most of us, no name. Mention it in conversation and you’ll be rewarded with a shrug. Even if your listeners have heard the term before, they will struggle to define it. Neoliberalism: do you know what it is?
Its anonymity is both a symptom and cause of its power. It has played a major role in a remarkable variety of crises: the financial meltdown of 2007‑8, the offshoring of wealth and power, of which the Panama Papers offer us merely a glimpse, the slow collapse of public health and education, resurgent child poverty, the epidemic of loneliness, the collapse of ecosystems, the rise of Donald Trump. But we respond to these crises as if they emerge in isolation, apparently unaware that they have all been either catalysed or exacerbated by the same coherent philosophy; a philosophy that has – or had – a name. What greater power can there be than to operate namelessly?
So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution. But the philosophy arose as a conscious attempt to reshape human life and shift the locus of power.
Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.
Attempts to limit competition are treated as inimical to liberty. Tax and regulation should be minimised, public services should be privatised. The organisation of labour and collective bargaining by trade unions are portrayed as market distortions that impede the formation of a natural hierarchy of winners and losers. Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone. Efforts to create a more equal society are both counterproductive and morally corrosive. The market ensures that everyone gets what they deserve.
Was the Rise of Neoliberalism the Root Cause of Extreme Inequality?
The transformation of currency seems inevitable - despite what many still dismiss as the fad of Bitcoin. Distributed ledger technologies are in their infancy. This is an interesting signal of the emergence of institutional uses.
Banks to invest around $50 million in digital cash settlement project
Several of the world’s largest banks are in the process of investing around $50 million to create a digital cash system using blockchain technology to settle financial transactions, according to people familiar with the plans.
The previously disclosed project, known as the "utility settlement coin," was first proposed by Swiss bank UBS Group AG and London-based technology startup Clearmatics in 2015. It aims to develop a system to make clearing and settlement in financial markets more efficient.
Around a dozen banks are investing in a new entity called Fnality which would run the project, one of the people said.
The deal has not been finalized so details may change. The new system could launch in 2020, the person said.
Banks that had previously disclosed they were working on earlier phases of the project include UBS, Banco Santander, Bank of New York Mellon Corp, State Street Corp, Credit Suisse Group AG, Barclays PLC, HSBC Holdings Plc and Deutsche Bank AG.
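The article gives no technical design, but a minimal sketch can show what “digital cash settlement on a shared ledger” means at the data level: a debit and a credit recorded together as one final, append-only entry. The Ledger class and the bank names below are hypothetical illustrations, not anything from the Fnality project.

```python
# Minimal sketch of settlement on a shared ledger: debit and credit
# are recorded together as one final, append-only entry.
# The Ledger class and bank names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    entries: list = field(default_factory=list)  # append-only settlement log

    def settle(self, payer: str, payee: str, amount: int) -> None:
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient digital cash")
        # Debit and credit happen in one step, so settlement is atomic and final.
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        self.entries.append((payer, payee, amount))

ledger = Ledger({"Bank A": 100, "Bank B": 50})
ledger.settle("Bank A", "Bank B", 30)
print(ledger.balances)  # {'Bank A': 70, 'Bank B': 80}
```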
This is another strong signal - not only of the situation in the US - but of a condition that is relevant to many countries. We have to re-imagine the Internet as a public infrastructure providing fiber-optic connections everywhere - urban and rural. If not, the possibility space of a global nervous system will be hijacked and enclosed by privateers.
The major ISPs have waged such a long war against net neutrality because all of the federal provisions that involve curtailing the power of monopoly by promoting competition also empower the FCC to enforce net neutrality. Provisions that, again, have their roots in phone monopolies. Provisions that worked. This is also why the competitors to the major ISPs regularly support efforts to restore Title II regulation of the industry: the 1996 Act’s point was to codify competition as law, which helped enable their entry.
Broadband Monopolies Are Acting Like Old Phone Monopolies. Good Thing Solutions to That Problem Already Exist
The future of competition in high-speed broadband access looks bleak. A vast majority of homes have only their cable monopoly as a choice for speeds in excess of 100 Mbps, and small ISPs and local governments are carrying the heavy load of deploying fiber networks that surpass gigabit cable networks. Research now shows that these new monopolies have striking similarities to the telephone monopolies of old. But we don’t have to repeat the past; we’ve already seen how laws that promote competition can break monopolies.
In the United States, high-speed fiber deployment is low and slow. EFF decided to look into this problem, and we now have a research report by the Samuelson-Glushko Technology Law & Policy Clinic (TLPC) on behalf of EFF that details the history of our telecom competition policies, why they came into existence with the passage of the 1996 Telecommunications Act, and the FCC’s mistakes—starting in 2005—that eroded the law and have given us a high-speed broadband access monopoly that disproportionately impacts low-income and rural Americans.
This is an important signal - that is not completely determined yet - but it outlines less about the future and much more about the current conditions that are being contested.
“It feels like a chunk of the Internet is gone or different. People feel the Internet is not as we knew it,” says Venkat Balasubramani, who runs a cyber law firm in Seattle.
The Splinternet Is Growing
“The Net interprets censorship as damage and routes around it,” said Internet pioneer John Gilmore in a 1993 Time magazine article about a then-ungoverned place called “cyberspace.” How times have changed.
In April, Sri Lankan authorities blocked citizens’ access to social media sites like Facebook and YouTube following a major terrorist attack. Such censorship, once considered all but inconceivable, is now commonplace in a growing number of countries.
Russia, for instance, approved an “Internet sovereignty” law in May that gives the government broad power to dictate what its citizens can see online. And China is not just perfecting its “Great Firewall,” which blocks such things as searches for “Tiananmen Square” and the New York Times, but is seeking to export its top-down version of the web to countries throughout Southeast Asia.
This phenomenon, colloquially called “splinternet,” whereby governments seek to fence off the World Wide Web into a series of national Internets, isn’t new. The term, also known as cyberbalkanization, has been around since the 1990s. But lately the rupturing has accelerated, as companies censor their sites to comply with national rules and governments blot out some sites entirely.
Given the role of the Internet - this is an interesting weak signal that seems to run counter to the dominant narrative today.
“The cellphones changed how drugs were dealt,” Edlund told me. In the ’80s, turf-based drug sales generated violence as gangs attacked and defended territory, and also allowed those who controlled the block to keep profits high.
The cellphone broke the link, the paper claims, between turf and selling drugs. “It’s not that people don’t sell or do drugs anymore,” Edlund explained to me, “but the relationship between that and violence is different.”
The Collapsing Crime Rates of the ’90s Might Have Been Driven by Cellphones
Did technology disrupt the drug game, too?
It’s practically an American pastime to blame cellphones for all sorts of societal problems, from distracted parents to faltering democracies. But the devices might have also delivered a social silver lining: a de-escalation of the gang turf wars that tore up cities in the 1980s.
The intriguing new theory suggests that the arrival of mobile phones made holding territory less important, which reduced intergang conflict and lowered profits from drug sales.
Lena Edlund, a Columbia University economist, and Cecilia Machado, of the Getulio Vargas Foundation, lay out the data in a new National Bureau of Economic Research working paper. They estimate that the diffusion of cellphones could explain 19 to 29 percent of the decline in homicides seen from 1990 to 2000.
The advances in our understanding of human-microbe relationships have been part of the progress in domesticating DNA - this is a good summary of the last decade, with indications of the future. It is also a great signal of the future of transdisciplinary knowledge domains.
Developing a new conceptual framework and applying it to the human microbiome will require much more collaboration between investigators working across disparate fields, including evolution, ecology, microbiology, biomedicine and computational biology. It will also demand significant changes in how data and other resources are distributed between scientists, and in how currently disparate areas of microbiome research inter-relate.
Priorities for the next 10 years of human microbiome research
The dream of microbiome-based medicine requires a fresh approach — an ecological and evolutionary understanding of host-microbe interactions
Over the past decade, more than US$1.7 billion has been spent on human microbiome research. Major projects are under way in the United States, the European Union, China, Canada, Ireland, South Korea and Japan.
This investment has confirmed the importance of the microbiome to human health and development. It is now known, for instance, that newborns receive essential microorganisms from their mothers [1]. Moreover, the sugars in breast milk that infants cannot digest nourish babies’ developing microbiomes [2], which in turn shape their immune systems [3].
Now is a good moment for reflection. The biggest investment made (around $1 billion) comes from the United States. Some 20% of this has gone to two phases of the Human Microbiome Project (HMP), which is creating the research resources needed for studying the human microbiome (see ‘Big spend’). A review [4] of what that decade of investment in human microbiome research has achieved was published in February (see ‘Big wins’). And findings from the second phase of the HMP are published in this week’s Nature.
In my view, most of the research so far has placed too much emphasis on cataloguing species names. We’ve been characterizing the human microbiome as if it were a relatively fixed property to be mapped and manipulated — one that is separate from the rest of the body. In fact, I think that interventions that could help to treat conditions such as diabetes, cancer and autoimmune diseases will be discovered only if we move beyond species catalogues and begin to understand the complex and mutable ecological and evolutionary relationships that microbes have with each other and with their hosts.
This is a very important signal - we must remember that evolution is a constant process that selects whatever survives. It doesn’t sustain any particular entity but will keep using systems that work for survival.
First, resistance can and does evolve when bacteria are persistently exposed to a new antibiotic they have never encountered. Let's call this the old-fashioned evolutionary road. Second, when bacteria are exposed to a novel antibiotic and are in contact with bacteria already resistant to this antibiotic, it is just a matter of time before they get cozy and trade genes. And, importantly, once genes have been packaged for trading, they become easier and easier to share. Bacteria then meet other bacteria, which meet more bacteria, until one of them eventually meets you.
By studying their DNA sequences, we were able to show that these resistant DHPS enzymes had been present in these two groups of bacteria for at least 500 million years. Yet sulfa drugs were first synthesized in the 1910s. How could resistance be around 500 million years ago? And how did these resistance genes find their way into the disease-causing bacteria plaguing hospitals worldwide?
Antibiotic resistance is not new – it existed long before people used drugs to kill bacteria
Antibiotics changed the world in more ways than one. They made surgery routine and childbirth safer. Intensive farming was born. For decades, antibiotics have effectively killed or stopped the growth of disease-causing bacteria. Yet it was always clear that this would be a rough fight. Bacteria breed fast, and that means that they adapt rapidly. The emergence of antibiotic resistance was predicted by none other than Sir Alexander Fleming, the discoverer of penicillin, less than a year after the first batch of penicillin was mass produced.
Yet, contrary to popular belief, antibiotic resistance did not evolve recently, or in response to our use and misuse of antibiotics in humans and animals. Antibiotic resistance first evolved millions of years ago, and in the most mundane of places.
Most antibiotics are naturally produced by bacteria living in soil. They produce these deadly chemical compounds to fend off competing species. Yet, in the long game that is evolution, competing species are unlikely to sit idly by. Any mutant capable of tolerating a minimal quantity of the antibiotic will have a survival advantage and will be selected for—over generations this will produce organisms that are highly resistant.
So it's a foregone conclusion that antibiotic resistance, for any antibiotic researchers might ever discover, is likely already out there. Yet people keep talking about the evolution of antibiotic resistance as a recent phenomenon. Why?
Another important signal - still only partial and scary. The interesting research questions are whether these bacteria will begin to metabolize plastics, and whether there are beneficial bacteria also colonizing plastics.
The scientists then used sequence analysis to investigate the complex communities on the microplastics. Around 500 different species of eukaryotes were present on the tiny particles.
Microorganisms on microplastics
Organisms can grow on microplastics in freshwater ecosystems. The findings of a recent study undertaken by researchers from the Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB) and the Leibniz Institute for Baltic Sea Research, Warnemünde (IOW) show that the potentially toxin-producing plankton species Pfiesteria piscicida prefers to colonise plastic particles, where it is found in densities 50 times higher than in the surrounding water of the Baltic Sea and about two to three times higher than on comparable wood particles floating in the water.
A plastic item weighing one gram floating in the sea can harbour more living organisms than a thousand litres of surrounding seawater. To date, little research has been conducted to determine the extent to which microorganisms colonise microplastics in brackish ecosystems, and which species dominate such populations. A team of limnologists has investigated the natural colonisation of polyethylene (PE) and polystyrene (PS) microplastics by eukaryotic microorganisms. Examples of eukaryotic microorganisms include plankton species whose cells—unlike those of bacteria and viruses—contain a nucleus.
Top colonisers of microplastics potentially have an adverse effect on animal and human health
The dinoflagellate Pfiesteria piscicida, a potentially toxic plankton species, headed the top 20 microorganisms on microplastics. It reached densities about fifty times as high as in the surrounding water and about two to three times as high as on comparable wood particles. Its name means “fish killer”—after all, this pathogen may damage the skin of fish by producing toxins. The mass production of these toxins may present a serious threat to human and animal health.
This is a weak but worthy signal of the inevitable emergence of mind-environment interfaces - which will also likely be ubiquitous. Of course this particular DARPA initiative is no guarantee of success - but its importance lies in the creation of an investment field - as with the birthing of the Internet or the kickstarting of self-driving cars.
The program is seeking technologies that can read and write to brain cells in just 50 milliseconds round-trip, and can interact with at least 16 locations in the brain at a resolution of 1 cubic millimeter (a space that encompasses thousands of neurons).
DARPA Funds Ambitious Brain-Machine Interface Program
The N3 program aims to develop wearable devices that let soldiers communicate directly with machines.
DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program has awarded funding to six groups attempting to build brain-machine interfaces that match the performance of implanted electrodes but with no surgery whatsoever.
By simply popping on a helmet or headset, soldiers could conceivably command control centers without touching a keyboard; fly drones intuitively with a thought; even feel intrusions into a secure network. While the tech sounds futuristic, DARPA wants to get it done in four years.
“It’s an aggressive timeline,” says Krishnan Thyagarajan, a research scientist at PARC and principal investigator of one of the N3-funded projects. “But I think the idea of any such program is to really challenge the community to push the limits and accelerate things which are already brewing. Yes, it’s challenging, but it’s not impossible.”
The N3 program fits right into DARPA’s high-risk, high-reward biomedical tech portfolio, including programs in electric medicine, brain implants and electrical brain training. And the U.S. defense R&D agency is throwing big money at the program: Though a DARPA spokesperson declined to comment on the amount of funding, two of the winning teams are reporting eye-popping grants of $19.48 million and $18 million.
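For concreteness, here are the performance targets the article quotes, encoded as a simple spec check - a hypothetical helper sketch, not an artifact from DARPA or any of the funded teams.

```python
# The article's stated N3 targets, encoded as a simple spec check.
# Hypothetical helper for concreteness, not a DARPA artifact.
from dataclasses import dataclass

@dataclass
class N3Spec:
    round_trip_ms: float = 50.0   # read-and-write latency budget
    min_sites: int = 16           # simultaneous brain locations
    resolution_mm3: float = 1.0   # spatial resolution per site

def meets_spec(latency_ms: float, sites: int, resolution_mm3: float,
               spec: N3Spec = N3Spec()) -> bool:
    return (latency_ms <= spec.round_trip_ms
            and sites >= spec.min_sites
            and resolution_mm3 <= spec.resolution_mm3)

print(meets_spec(latency_ms=45.0, sites=16, resolution_mm3=1.0))  # True
print(meets_spec(latency_ms=80.0, sites=8, resolution_mm3=2.0))   # False
```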
The ‘Hard Problem’ is still beyond our grasp, but here is an interesting signal of one of the horizons of theory concerning the question of how morality works. One key struggle is making a distinction between a disembodied Ethics and a biocultural embodiment of relationships.
The neurobiology of conscience
An exploration of the science behind our morality from philosopher Patricia Churchland is illuminating and grounded, finds Nicholas A. Christakis.
What is our conscience, and where does it come from? In her highly readable Conscience, the philosopher Patricia Churchland argues that “we would have no moral stance on anything unless we were social”.
That we have a conscience at all relates to how evolution has shaped our neurobiology for social living. Thus, we judge what is right or wrong using feelings that urge us in a general direction and judgement that shapes these urges into actions. Such judgement typically reflects “some standard of a group to which the individual feels attached”. This idea of conscience as a neurobiological capacity for internalizing social norms contrasts with strictly philosophical accounts of how and why we tell right from wrong.
There is a strand of thought in evolutionary biology (advanced, for instance, by the theorist Bret Weinstein) that the capacity for moral debate itself has a social function, binding groups regardless of the topics contested or their abstract moral ‘rightness’. Moreover, many of our moral rules — such as the idea that we should not betray our friends or abandon our children — have clearly been shaped by natural selection to optimize our capacity to live in groups. Other rules, for instance regarding the correctness of reciprocity, are similar: we feel quite intensely and innately that if someone gives us a gift of food, we should reciprocate on a future occasion.
Not yet ready for primetime - this is a good signal of the trajectory and pace of bringing autonomous automation to the world of logistics. A key implication is the transformation of shopping - and a hint that we will have to re-imagine the use of shopping malls (they won’t disappear - but will they be where we shop?). There is a short video to illustrate.
Ford Self-Driving Vans Will Use Legged Robots to Make Deliveries
Agility Robotics’ Digit will bring packages from a delivery vehicle to your front door
Ford is adding legs to its robocars—sort of.
The automaker is announcing today that its fleet of autonomous delivery vans will carry more than just packages: Riding along with the boxes in the back there will be a two-legged robot.
Digit, Agility Robotics’ humanoid unveiled earlier this year on the cover of IEEE Spectrum, is designed to move in a more dynamic fashion than regular robots do, and it’s able to walk over uneven terrain, climb stairs, and carry 20-kilogram packages.
Ford says in a post on Medium that Digit will bring boxes from the curb all the way to your doorstep, covering those last few meters that self-driving cars are unable to. The company plans to launch a self-driving vehicle service in 2021.
Another signal.
Self-driving trucks begin mail delivery test for U.S. Postal Service
The U.S. Postal Service on Tuesday started a two-week test transporting mail across three Southwestern states using self-driving trucks, a step forward in the effort to commercialize autonomous vehicle technology for hauling freight.
San Diego-based startup TuSimple said its self-driving trucks will begin hauling mail between USPS facilities in Phoenix and Dallas to see how the nascent technology might improve delivery times and costs. A safety driver will sit behind the wheel to intervene if necessary and an engineer will ride in the passenger seat.
If successful, it would mark an achievement for the autonomous driving industry and a possible solution to the driver shortage and regulatory constraints faced by freight haulers across the country.
The pilot program involves five round trips, each totaling more than 2,100 miles (3,380 km) or around 45 hours of driving. It is unclear whether self-driving mail delivery will continue after the two-week pilot.
This article has a couple of interesting signals - alternative renewable energy and flying cars. There is a short video and the prototype is beautiful.
HYDROGEN FUEL CELL FLYING CAR HAS A RANGE OF 400 MILES
Skai’s The Limit
Massachusetts startup Alaka’i has designed a flying car that the company touts as the “first air mobility vehicle powered by hydrogen fuel cells” in a flashy announcement video. The big promise: ten times the power of conventional lithium batteries without compromising on carbon emissions.
The hydrogen fuel cells give the five-passenger Skai a maximum range of 400 miles (640 km) with a flight time of up to four hours.
The company has been working on the design for four years, and is hoping to receive Federal Aviation Administration certification before the end of 2020.
Six rotors enable vertical take-off and landing, enabling the vehicle to essentially fly like a massive drone. An “Airframe Parachute” ensures that the Skai doesn’t simply drop out of the air in the case of a propeller failure.
A great signal of the emerging phase transition in global energy geopolitics.
Renewable Energy Costs Take Another Tumble, Making Fossil Fuels Look More Expensive Than Ever
The cost of renewable energy has tumbled even further over the past year, to the point where almost every source of green energy can now compete on cost with oil, coal and gas-fired power plants, according to new data released today.
Hydroelectric power is the cheapest source of renewable energy, at an average of $0.05 per kilowatt hour (kWh), but the average cost of developing new power plants based on onshore wind, solar photovoltaic (PV), biomass or geothermal energy is now usually below $0.10/kWh. Not far behind that is offshore wind, which costs close to $0.13/kWh.
These figures are global averages and it is worth noting that the cost of individual projects can vary hugely – the cost of producing electricity from a biomass energy plant, for example, can range from as low as $0.05/kWh to a high of almost $0.25/kWh.
However, all these fuel types are now able to compete with the cost of developing new power plants based on fossil fuels such as oil and gas, which typically range from $0.05/kWh to over $0.15/kWh.
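A quick arithmetic check of the article’s rough global averages makes the point plain (all figures in USD per kWh; the overlap test below is an illustrative simplification, and the grouped entry reflects the article’s “usually below $0.10/kWh” shorthand):

```python
# Rough global-average costs quoted in the article, in USD per kWh.
renewable_costs = {
    "hydro": 0.05,
    "onshore wind / solar PV / biomass / geothermal": 0.10,
    "offshore wind": 0.13,
}
FOSSIL_LOW, FOSSIL_HIGH = 0.05, 0.15  # typical range for new fossil-fuel plants

for source, cost in renewable_costs.items():
    verdict = "competitive" if cost <= FOSSIL_HIGH else "not yet competitive"
    print(f"{source}: ${cost:.2f}/kWh -> {verdict} "
          f"(fossil range ${FOSSIL_LOW:.2f}-${FOSSIL_HIGH:.2f}/kWh)")
```

Every renewable source listed lands inside the typical cost band for new fossil-fuel plants - which is the article’s headline claim.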
Now this is a signal that may be fake news right now - but it is only a matter of time before something like this hits not just the porn world - but any social domain. Let’s not forget the added capability of ‘deep fakes’. Even attempting to legislate against this technology will only drive it underground. This signals a world of radical transparency.
DIY Facial Recognition for Porn Is a Dystopian Disaster
Someone is making dubious claims to have built a program for detecting faces in porn and cross-referencing against social media, with 100,000 identified so far.
Someone posting on Chinese social network Weibo claims to have used facial recognition to cross-reference women’s photos on social media with faces pulled from videos on adult platforms like Pornhub.
In a Monday post on Weibo, the user, who says he's based in Germany, claimed to have “successfully identified more than 100,000 young ladies” in the adult industry “on a global scale.”
To be clear, the user has posted no proof that he’s actually been able to do this, and hasn’t published any code, databases, or anything else besides an empty GitLab page to verify this is real. When Motherboard contacted the user over Weibo chat, he said they will release “database schema” and “technical details” next week, and did not comment further.
Still, his post has gone viral in both China on Weibo and in the United States on Twitter after a Stanford political science PhD candidate tweeted them with translations, which Motherboard independently verified. This has led prominent activists and academics to discuss the potential implications of the technology.
Another signal of emerging virtual realities.
Microsoft's Latest Text-to-Speech AI Generates Realistic Speech
In another leap forward for text-to-speech AI, Microsoft has announced the development of a new platform that can create seamless and natural speech samples.
Following Google’s introduction of its AI speech translation tool Translatotron, Microsoft has unveiled its latest text-to-speech AI system that can reportedly generate realistic speech. The technology was developed in partnership with a team of Chinese researchers.
In their paper published on GitHub, the team reported that the TTS AI utilizes two key components – a transformer and denoising auto-encoder – to work. A transformer is a type of neural architecture developed by scientists from Google Brain which emulates our own neurons. It analyzes inputs and outputs like synaptic links, making the TTS AI system process complex sentences efficiently.
The denoising auto-encoder is a neural network capable of reconstructing corrupted data. It operates on unsupervised learning, a branch of machine learning that gathers knowledge from unclassified, unlabeled, and uncategorized data sets.
With the help of these neural network systems, Microsoft’s TTS AI was able to reach 99.84 percent word intelligibility accuracy and 11.7 percent phoneme error rate for its automatic speech recognition.
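To ground the terminology: a denoising auto-encoder learns to reconstruct clean data from a deliberately corrupted copy, which is why it can train without labels. Below is a minimal sketch of the general technique in PyTorch - the layer sizes, the noise level, and the use of spectrogram-like frames are illustrative assumptions, not Microsoft’s actual model.

```python
# Minimal denoising auto-encoder sketch (general technique only).
# Layer sizes, noise level, and input shape are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, dim: int = 80, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noisy = x + 0.1 * torch.randn_like(x)      # corrupt the input
        return self.decoder(self.encoder(noisy))   # reconstruct the clean frame

model = DenoisingAutoEncoder()
frames = torch.randn(32, 80)  # a batch of (hypothetical) spectrogram frames
# Unsupervised objective: the target is the uncorrupted input itself.
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()
```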
And the signals of the increasing competence of narrow AI to outdo humans in particular types of work continue.
ALGORITHM PREDICTS PATIENT SURVIVAL IN INTENSIVE CARE
A new algorithm uses up to 23 years of individual disease history to predict patients’ chances of survival in the intensive care unit.
Determining which treatment is best for each intensive care patient is a great challenge and the existing methods that doctors and nurses use could be much better.
The new algorithm appears in the journal Lancet Digital Health.
“We have used Danish health data in a new way, using an algorithm to analyze file data from the individual patient’s disease history. The Danish National Patient Registry contains data on the disease history of millions of Danes, and in principle the algorithm is able to draw on the history of the individual citizen of benefit to the individual patient in connection with treatment,” says professor Søren Brunak from the Novo Nordisk Foundation Center for Protein Research at the University of Copenhagen.
230,000 INTENSIVE CARE PATIENTS
To develop the algorithm, the researchers used data on more than 230,000 patients admitted to intensive care units in Denmark in the period 2004-2016. In the study, the algorithm analyzed the individual patient’s disease history, covering as much as 23 years.
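The article doesn’t detail the model itself, but the general shape of such an approach can be sketched: encode each patient’s long coded disease history as features and fit a classifier on outcomes. The diagnosis codes, outcome labels, and model choice below are illustrative assumptions, not the published Lancet Digital Health algorithm.

```python
# Sketch of the general approach only: featurize coded disease histories
# and fit a survival classifier. Codes, labels, and model are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One string per patient: a chronological sequence of diagnosis codes.
histories = ["I21 E11 N18", "J44 J44 I50", "E11", "I50 N18 I21"]
survived  = [1, 0, 1, 0]  # hypothetical outcomes (1 = survived)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(histories, survived)
print(model.predict_proba(["I21 E11"])[:, 1])  # predicted survival probability
```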