Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition in which tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
The challenge of avoiding burnout reminds me of that old witticism, "It's easy to quit smoking. I've done it thousands of times" (the provenance is unclear). Burnout is inseparable from the industrial work ethic, and the industrial work ethic does in fact seem as terrible as smoking from certain perspectives. The very act of sitting for hours at a desk inspired standing desk evangelist James Levine to remark that "sitting is the new smoking." Yet, in other contexts, like meditation, sitting still for long periods has an entirely different narrativization. Why is sitting around for hours read as burnout in one context, and self-actualization in another? The difference, I believe, has little to do with surface behaviors like whether you are sitting, standing, or pumping iron between bouts of brogramming on a surfboard. Whether work leads to burnout or self-actualization is related to our ability to answer two questions about what we are doing: is it necessary, and is it meaningful?
Industrial work requires daily limits, weekends off, and vacations because burnout conditions aren't the exception, they are the default.
Industrial work makes survival a highly predictable matter, unlike say in hunting or foraging. One unit of work buys you one unit of continued survival.
The cost is that the connection between what you do and why you do it is no longer innate. You need an ideological theory of how the world works to make that connection.
When the outcome of work involves the variability of the environment in a consequential way, meaning and necessity are coupled. Life can be interesting without ideology.
What happens to you has variety, novelty, and unpredictability, and actually makes a difference to whether you die, and what it means if you live.
Under such conditions, work keeps you both alive, and interested in remaining alive. You cannot burn out or get bored.
"any sufficiently advanced kind of work is indistinguishable from play."
Work-life balance is not about actually integrating meaning and necessity, or interestingness and survivability. That is fundamentally not possible within the industrial order.
It's about not letting either survivability debt or interestingness debt accumulate to the point that you either die from inability to continue living, or kill yourself because you've lost the will to live.
The Cyberpaleo Ethic and the Spirit of Post-Capitalism
Now, you might think that same-day delivery is a slender reed on which to hang dreams of a better future, one more technology solution to a “first world problem.” But the revolution in logistics will spread far beyond more quickly meeting the needs of people who already have everything. Zipline, the on-demand drone delivery startup, is a good example. They aren’t delivering consumer goods, but vital blood supplies and medicine, leapfrogging the need for 20th century roads and hospital infrastructure, bringing life-saving blood to clinics and small local hospitals anywhere in Rwanda within 15 or 20 minutes. (Postpartum hemorrhage is one of the leading causes of death in Rwanda because keeping every blood type available within quick reach of everyone who needs it has been prohibitively expensive.)
Do More! What Amazon Teaches Us About AI and the “Jobless Future”
This is a great HBR article by John Hagel and John Seely Brown - highlighting one of the most fundamental design principles for organizational architecture in the 21st Century. This is also vital for anyone interested in Knowledge.
We believe there still is a compelling rationale for large institutions, but it’s a very different one from scalable efficiency. It’s scalable learning. In a world that is more rapidly changing and where our needs are evolving at an accelerating rate, the institutions that are most likely to thrive will be those that provide an opportunity to learn faster together.
We’re not talking about sharing existing knowledge more effectively (although there’s certainly a lot of opportunity there). In a world of exponential change, existing knowledge depreciates at an accelerating rate. The most powerful learning in this kind of world involves creating new knowledge. This kind of learning does not occur in a training room; it occurs on the job, in the day-to-day work environment.
Scalable learning not only helps people inside the institution learn faster. It also scales learning by connecting with others outside the institution and building deep, trust-based relationships that can help all participants to learn faster by working together.
Great Businesses Scale Their Learning, Not Just Their Operations
Ronald Coase nailed it back in 1937 when he identified scalable efficiency as the key driver of the growth of large institutions. It’s far easier and cheaper to coordinate the activities of a large number of people if they’re within one institution rather than spread out across many independent organizations.
But here’s the challenge. Scalable efficiency works best in stable environments that are not evolving rapidly. It also assumes that the constituencies served by these institutions will settle for standardized products and services that meet the lowest common denominator of need.
Today we live in a world that is increasingly shaped by exponentially improving digital technologies that are accelerating change, increasing uncertainty, and driving performance pressure on a global scale. Consumers are less and less willing to settle for the standardized offerings that drove the success of large institutions in the past. Our research into the long-term decline of return on assets for all public companies in the US from 1965 to today (it’s gone down by 75%) is just one indicator of this pressure. Another indicator is the shrinking life span of companies on the S&P 500. A third is the declining rates of trust indicated by the Edelman Trust Barometer — as the gap grows between what we want and expect and what we receive, our trust in the ability of these institutions to serve our needs erodes.
To reverse these trends, we need to move beyond narrow discussions of product or service innovation, or even more sophisticated conversations about process innovation or business model innovation. Instead, we need to talk about institutional innovation, or re-thinking the rationale for why we have institutions to begin with.
I have long believed that governments should make a bold move to open source software for as many of their technology and software needs as possible. This would truly enable a scaling of learning, as all government IT staff and scientific staff would inevitably also become developers of features and applications that could then be used by everyone. Here’s a recent article supporting this position.
The report found that GFDRR has, conservatively, achieved at least a 200% return on investment in its open source software efforts. While meeting project goals, the GeoNode software also became a popular tool among dozens of organizations around the world from the public sector, private sector, academia and civil society. This virtuous circle of win-win relationships can be summarized best by Paul Ramsey: “You get what you pay for, everyone gets what you pay for, and you get what everyone pays for.”
Leveraging Open Source as a Public Institution — New analysis reveals significant returns on investment in open source technologies
Examples abound of leading tech companies that have adopted an open source strategy and contribute actively to open source tools and communities. Google, for example, has long been a contributor to open source with projects such as its popular mobile operating system, Android, and recently launched a directory of its numerous projects. Amazon Web Services (AWS) is another major advocate, running most of its cloud services using open source software, and is adopting an open source strategy to better contribute back to the wider community. But can, and should, public institutions embrace an open source philosophy?
In fact, organizations of all types are increasingly taking advantage of the many benefits open source can bring in terms of cost-effectiveness, better code, lower barriers to entry, flexibility, and continual innovation. Clearly, these many benefits not only address the many misconceptions and stereotypes about open source software, but are also energizing new players to actively participate in the open source movement. Organizations like the National Geospatial-Intelligence Agency (NGA) have been systematically adopting and leveraging open source best practices for their geospatial technology, and even the U.S. Federal Government has adopted a far-reaching open source policy to spur innovation and foster civic engagement.
So, how can the World Bank – an institution that purchases and develops a significant amount of software – also participate and contribute to these communities? How can we make sure that, in the era of the ‘knowledge Bank’, digital and re-usable public goods (including open source software, data, and research) are available beyond single projects or reports?
At the Global Facility for Disaster Reduction and Recovery (GFDRR), the Open Data for Resilience Initiative (OpenDRI) team just reviewed more than seven years of strategic investment in open source in a new publication called OpenDRI and GeoNode: A Case Study for Institutional Investment in Open Source. The publication examines the history of the GeoNode software project – a web-based geospatial management and visualization tool – from its inception, tracing how GFDRR contributed to the project’s success. The report also examines the technical and the social aspects of creating and participating in an open source ecosystem, paying particular attention to how OpenDRI’s investment strategy encouraged the arrival of outside institutional investment from both non-profit and for-profit organizations.
This is a great signal - indicating the inevitable necessity of making all knowledge (especially recent research) openly accessible to all if we are going to scale learning and keep up with innovation to solve emerging problems.
In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020
In what European science chief Carlos Moedas calls a "life-changing" move, E.U. member states today agreed on an ambitious new open-access (OA) target. All scientific papers should be freely available by 2020, the Competitiveness Council—a gathering of ministers of science, innovation, trade, and industry—concluded after a 2-day meeting in Brussels. But some observers are warning that the goal will be difficult to achieve.
The OA goal is part of a broader set of recommendations in support of open science, a concept that also includes improved storage of and access to research data. The Dutch government, which currently holds the rotating E.U. presidency, had lobbied hard for Europe-wide support for open science, as had Carlos Moedas, the European commissioner for research and innovation.
"The time for talking about Open Access is now past. With these agreements, we are going to achieve it in practice," the Dutch state secretary for education, culture, and science, Sander Dekker, added in a statement.
"The means are still somewhat vague but the determination to reach the goal of having all scientific articles freely accessible by 2020 is welcome," says long-time OA advocate Stevan Harnad of the University of Québec in Canada. The decision was also welcomed by the League of European Research Universities (LERU), which called today's conclusions"a major boost for the transition towards an Open Science system."
Someone once said - never let a crisis go to waste. This is an important key to seeing learning opportunities, and basically opportunity, everywhere. In a world that is uncertain, survival is ultimately based on Response-Ability. This article is a great signal of embracing all the affordances of a continually changing world - not merely to survive, but to keep adapting and to flourish.
The Dutch Have Solutions to Rising Seas. The World Is Watching.
In the waterlogged Netherlands, climate change is considered neither a hypothetical nor a drag on the economy. Instead, it’s an opportunity.
ROTTERDAM, the Netherlands — The wind over the canal stirred up whitecaps and rattled cafe umbrellas. Rowers strained toward a finish line and spectators hugged the shore. Henk Ovink, hawkish, wiry, head shaved, watched from a V.I.P. deck, one eye on the boats, the other, as usual, on his phone.
Mr. Ovink is the country’s globe-trotting salesman in chief for Dutch expertise on rising water and climate change. Like cheese in France or cars in Germany, climate change is a business in the Netherlands. Month in, month out, delegations from as far away as Jakarta, Ho Chi Minh City, New York and New Orleans make the rounds in the port city of Rotterdam. They often end up hiring Dutch firms, which dominate the global market in high-tech engineering and water management.
That’s because from the first moment settlers in this small nation started pumping water to clear land for farms and houses, water has been the central, existential fact of life in the Netherlands, a daily matter of survival and national identity. No place in Europe is under greater threat than this waterlogged country on the edge of the Continent. Much of the nation sits below sea level and is gradually sinking. Now climate change brings the prospect of rising tides and fiercer storms.
From a Dutch mind-set, climate change is not a hypothetical or a drag on the economy, but an opportunity. While the Trump administration withdraws from the Paris accord, the Dutch are pioneering a singular way forward.
It is, in essence, to let water in, where possible, not hope to subdue Mother Nature: to live with the water, rather than struggle to defeat it. The Dutch devise lakes, garages, parks and plazas that are a boon to daily life but also double as enormous reservoirs for when the seas and rivers spill over. You may wish to pretend that rising seas are a hoax perpetrated by scientists and a gullible news media. Or you can build barriers galore. But in the end, neither will provide adequate defense, the Dutch say.
And what holds true for managing climate change applies to the social fabric, too. Environmental and social resilience should go hand in hand, officials here believe, improving neighborhoods, spreading equity and taming water during catastrophes. Climate adaptation, if addressed head-on and properly, ought to yield a stronger, richer state.
This is a very interesting project - an experiment in a new form of peer-review. Another signal of an inevitable change in scientific publishing.
Crowd-based peer review can be good and fast
Confidential feedback from many interacting reviewers can help editors make better, quicker decisions
When it works — and that’s much of the time — peer review is a wondrous thing. But all too often, it can be an exercise in frustration for all concerned. Authors are on tenterhooks to learn of potentially career-changing decisions. Generous peer-reviewers are overwhelmed. And editors are condemned to doggedly sending reminders weeks after deadlines pass. When the evaluation finally arrives, it might be biased, inaccurate or otherwise devoid of insight. As an author and latterly editor-in-chief of Synlett, a chemical-synthesis journal, I’ve seen too many ‘reviews’ that say little more than “this manuscript is excellent and should be published” or “this manuscript clearly doesn’t reach the standards for your journal”.
In May last year, we began to upload manuscripts on to the platform one at a time, and were impressed with the overwhelming number of responses collected after only a few days. This January, we put up two manuscripts simultaneously and gave the crowd 72 hours to respond. Each paper received dozens of comments that our editors considered informative. Taken together, responses from the crowd showed at least as much attention to fine details, including supporting information outside the main article, as did those from conventional reviewers.
So far, we have tried crowd reviewing with ten manuscripts. In all cases, the response was more than enough to enable a fair and rapid editorial decision. Compared with our control experiments, we found that the crowd was much faster (days versus months), and collectively provided more-comprehensive feedback.
This is an entertaining 17 min TED talk that discusses what may be one of the inevitable consequences of domesticating DNA.
Will our kids be a different species?
Throughout human evolution, multiple versions of humans co-existed. Could we be mid-upgrade now? Juan Enriquez sweeps across time and space to bring us to the present moment — and shows how technology is revealing evidence that suggests rapid evolution may be under way.
This is a very interesting proposal that will provide a 21st century paradigm of life and evolution - before the century is out this network-mycelial understanding of evolving life forms will also be augmented with the gene flows of horizontal gene transfer - and of course new lifeforms created via our domestication of DNA. This proposal is also a weak signal toward a new concept of the ‘individual’ as a social being.
Scientists propose a new paradigm that paints a more inclusive picture of the evolution of organisms and ecosystems
In 1859, Charles Darwin included a novel tree of life in his trailblazing book on the theory of evolution, On the Origin of Species. Now, scientists from Rutgers University-New Brunswick and their international collaborators want to reshape Darwin's tree.
A new era in science has emerged without a clear path to portraying the impacts of microbes across the tree of life. What's needed is an interdisciplinary approach to classifying life that incorporates the countless species that depend on each other for health and survival, such as the diverse bacteria that coexist with humans, corals, algae and plants, according to the researchers, whose paper is published online today in the journal Trends in Ecology and Evolution.
"In our opinion, one should not classify the bacteria or fungi associated with a plant species in separate phylogenetic systems (trees of life) because they're one working unit of evolution," said paper senior author Debashish Bhattacharya, distinguished professor, Department of Ecology, Evolution and Natural Resources, in the Rutgers School of Environmental and Biological Studies. "The goal is to transform a two-dimensional tree into one that is multi-dimensional and includes biological interactions among species."
A tree of life has branches showing how diverse forms of life, such as bacteria, plants and animals, evolved and are related to each other. Much of the Earth's biodiversity consists of microbes, such as bacteria, viruses and fungi, and they often interact with plants, animals and other hosts in beneficial or harmful ways. Forms of life that are linked physically and evolve together (i.e. are co-dependent) are called symbiomes, the paper says.
The authors propose a new tree of life framework that incorporates symbiomes. It's called SYMPHY, short for symbiome phylogenetics. The idea is to use sophisticated computational methods to paint a much broader, more inclusive picture of the evolution of organisms and ecosystems. Today's tree of life fails to recognize and include symbiomes. Instead, it largely focuses on individual species and lineages, as if they are independent of other branches of the tree of life, the paper says.
The authors believe that an enhanced tree of life will have broad and likely transformative impacts on many areas of science, technology and society. These include new approaches to dealing with environmental issues, such as invasive species, alternative fuels and sustainable agriculture; new ways of designing and engineering machinery and instruments; enlightened understanding of human health problems; and new approaches to drug discovery.
Another interesting signal toward the domestication of DNA.
Designer Viruses Stimulate the Immune System to Fight Cancer
Swiss scientists from the University of Geneva (UNIGE), Switzerland, and the University of Basel have created artificial viruses that can be used to target cancer. These designer viruses alert the immune system and cause it to send killer cells to help fight the tumor. The results, published in the journal Nature Communications, provide a basis for innovative cancer treatments.
Immunotherapies have been successfully used to treat cancer for many years; they “disinhibit” the body’s defense system and so also strengthen its half-hearted fight against cancer cells. Stimulating the immune system to specifically and wholeheartedly combat cancer cells, however, has remained a distant goal. Researchers have now succeeded in manufacturing innovative designer viruses that could do exactly that. Their teams were led by Professor Doron Merkler from the Department of Pathology and Immunology of the Faculty of Medicine, UNIGE, and Professor Daniel Pinschewer from the Department of Biomedicine, University of Basel.
The researchers built artificial viruses based on lymphocytic choriomeningitis virus (LCMV), which can infect both rodents and humans. Although they were not harmful for mice, they did release the alarm signals typical of viral infections. The virologists also integrated certain proteins into the virus that are otherwise found only in cancer cells. Infection with the designer virus enabled the immune system to recognize these cancer proteins as dangerous.
The unique combination of alarm signals and the cancer cell protein stimulated the immune system to create a powerful army of cytotoxic T-lymphocytes, also known as killer cells, which identified the cancer cells through their protein and successfully destroyed them.
Here is another signal - weaker - but indicative of a continued trajectory.
Designer protein halts flu
There’s a new weapon taking shape in the war on flu, one of the globe’s most dangerous infectious diseases. Scientists have created a designer protein that stops the influenza virus from infecting cells in culture and protects mice from getting sick after being exposed to a heavy dose of the virus. It can also be used as a sensitive diagnostic. And although it isn’t ready as a treatment itself, the protein may point the way to future flu drugs, scientists say.
In 2011, researchers led by David Baker, a computational biologist at the University of Washington in Seattle, created a designer protein that binds the stem of the flu surface protein hemagglutinin (HA), which prevented viral infection in cell cultures. But because the stem is often shrouded by additional protein, it can be hard for drugs to reach it.
Now, Baker’s team has designed proteins to target HA’s more exposed head group. They started by analyzing x-ray crystal structures that show in atomic detail how flu-binding antibodies in people grab on to the three sugar-binding sites on HA’s head. They copied a small portion of the antibody that wedges itself into one of these binding sites. They then used protein design software called Rosetta to triple that head-binding section, creating a three-part, triangular protein, which the computer calculated would fit like a cap over the top of HA’s head group. Next, they synthesized a gene for making the protein and inserted it into bacteria, which cranked out copies for them to test.
In the test, Baker’s team immobilized copies of the protein on a paperlike material called nitrocellulose. They then exposed it to different strains of the virus, which it grabbed and held. “We call it flu glue, because it doesn’t let go,” Baker says. In other experiments, the protein blocked the virus from infecting cells in culture, and it even prevented mice from getting sick when administered either 1 day before or after viral exposure, they report today in Nature Biotechnology.
A while ago a project called the Human Connectome Project was started, emulating the previously successful Human Genome Project. This article is definitely a signal for how that project, in conjunction with algorithmic intelligence, will enhance human capacity to understand how the brain functions and, eventually, how to support its development.
“It’s going to be really important to use machine learning in the future to pull all these pieces of information together,” says Emerson. In addition to brain scan data, researchers could gather behavioral results, environmental exposures, and more. Once that is done, “we’re going to have a very good shot at really nailing this early prediction.” The paper is published today in the journal Science Translational Medicine.
AI Detects Autism in Infants (Again)
Back in February, we brought you news of a deep-learning algorithm able to predict autism in two-year-olds based on structural brain changes beginning at six months of age.
Now, the same group at the University of North Carolina has again applied machine learning to the goal of predicting autism, with equally impressive results. This time, instead of structural changes, they were able to detect changes in brain function of six-month-olds that predicted if the children would later develop autism.
The study is notable because there were no false positives—that is, all the children predicted to develop autism did.
There were a few misses, however. Of 59 six-month-old infants at high risk for autism—meaning they had at least one sibling with autism—the algorithm correctly predicted 9 of the 11 who later received a positive diagnosis.
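To make those figures concrete, here is a back-of-the-envelope calculation using only the numbers quoted above; the full confusion matrix is inferred from this excerpt rather than taken from the paper itself:

```python
# Metrics inferred from the excerpt: 59 high-risk infants, 11 later diagnosed,
# 9 of the 11 flagged by the algorithm, and no false positives.
tp = 9                      # flagged and later diagnosed
fn = 11 - tp                # diagnosed but missed by the algorithm
fp = 0                      # "no false positives"
tn = 59 - tp - fn - fp      # remaining infants, correctly not flagged

sensitivity = tp / (tp + fn)   # ~0.82: 9 of 11 cases caught
specificity = tn / (tn + fp)   # 1.0: nobody was flagged incorrectly
ppv = tp / (tp + fp)           # 1.0: every positive prediction was right

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, PPV={ppv:.2f}")
```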
By combining this functional analysis with the earlier structural results, it is very possible one could create a highly sensitive and accurate early diagnostic test for autism, says first author Robert Emerson of the Carolina Institute for Developmental Disabilities at UNC. And AI is going to be key to making that happen, he adds.
This is an important idea - the need to validate that algorithmic intelligence is actually doing what its designers say it is doing. Perhaps one of the institutional innovations necessary for the 21st century is an ‘Auditor General of AI’ - covering all forms of AI across all domains of use.
Inspecting Algorithms for Bias
Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.
It was a striking story. “Machine Bias,” the headline read, and the teaser proclaimed: “There’s software used across the country to predict future criminals. And it’s biased against blacks.”
ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS. It is being used to forecast which criminals are most likely to reoffend. Guided by such forecasts, judges in courtrooms throughout the United States make decisions about the future of defendants and convicts, determining everything from bail amounts to sentences. When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend.” And COMPAS tended to make the opposite mistake with whites: “They are much more likely than blacks to be labeled lower risk but go on to commit other crimes.”
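For readers curious what this kind of audit looks like mechanically, here is a minimal sketch in Python. The numbers and column names are invented for illustration (they are not ProPublica's data); the point is simply that similar overall accuracy across groups can coexist with very different false-positive and false-negative rates:

```python
import pandas as pd

# Toy illustration of a per-group error audit (made-up counts, not COMPAS data).
df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "label_high": [1, 1, 0, 0, 1, 1, 0, 0],   # algorithm flagged as high risk
    "reoffended": [1, 0, 1, 0, 1, 0, 1, 0],   # observed outcome
    "count":      [300, 200, 100, 400, 250, 80, 220, 450],
})

for group, g in df.groupby("group"):
    tp = g.loc[(g.label_high == 1) & (g.reoffended == 1), "count"].sum()
    fp = g.loc[(g.label_high == 1) & (g.reoffended == 0), "count"].sum()
    fn = g.loc[(g.label_high == 0) & (g.reoffended == 1), "count"].sum()
    tn = g.loc[(g.label_high == 0) & (g.reoffended == 0), "count"].sum()
    print(group,
          f"false-positive rate={fp / (fp + tn):.2f}",   # flagged but did not reoffend
          f"false-negative rate={fn / (fn + tp):.2f}")   # not flagged but did reoffend
```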
Imagining the consequences of the self-driving vehicle is a challenge to some of our deeply held values regarding autonomy. However, the new vehicle may offer more comfort for many of the more private uses of the car.
It Could Be 10 Times Cheaper To Take Electric Robo-Taxis Than To Own A Car By 2030
https://www.fastcompany.com/40424452/it-could-be-10-times-cheaper-to-take-electric-robo-taxis-than-to-own-a-car-by-2030
A new report predicts that we’re on the edge of an incredibly rapid transition to an entirely new transportation system–where it will be so much cheaper and easier to not own a car, you’ll get rid of it as soon as you can.
Ask a typical industry analyst how long it might take Americans to take most trips in electric cars, and they might say the middle of the century–or later. The Energy Information Administration predicts that only about 3% of miles traveled in the U.S. in 2050 will happen in electric cars. But a new report suggests that it could happen in a little more than a decade.
Self-driving cars, the report predicts, will make ride hailing so cheap that the market will quickly transform–and because electric cars can last longer with heavy use, it will make economic sense for those cars to be electric, as well. By 2030, 95% of passenger miles traveled in the U.S. could be happening in on-demand, autonomous electric cars owned by fleets rather than individuals. The average family could be saving $5,600 a year on transportation. Also, the oil industry could collapse.
The linear, incremental growth of electric vehicles predicted by some analysts might be wrong. “This is not an energy transition,” says James Arbib, a London-based venture investor who co-authored the Rethinking Transportation report with serial entrepreneur and author Tony Seba. “This is a technology disruption. And technology disruptions happen in S-curves. They happen much more quickly.”
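A quick sketch of what Arbib and Seba mean by S-curves versus linear projections: the parameters below are illustrative guesses, not figures from the Rethinking Transportation report; only the EIA's ~3% figure and the report's ~95%-by-2030 claim come from the text above.

```python
import numpy as np

# Linear ramp vs. logistic (S-curve) adoption of electric passenger miles.
years = np.arange(2017, 2031)
linear = 0.03 * (years - 2017) / 13                 # slow ramp toward ~3% by 2030
midpoint, steepness, ceiling = 2024, 0.9, 0.95      # assumed S-curve parameters
s_curve = ceiling / (1 + np.exp(-steepness * (years - midpoint)))

for y, lin, s in zip(years, linear, s_curve):
    print(y, f"linear {lin:5.1%}", f"S-curve {s:5.1%}")
```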
This is another good signal for the “Moore’s Law is Dead - Long Live Moore’s Law” file. Yes, we are nearing the physical limits of shrinking the type of computer chips we’ve been making for decades - but new computational paradigms and approaches loom on the horizon.
After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”
New system allows optical “deep learning”
Neural networks could be implemented more quickly using new photonic technology.
“Deep learning” computer systems, based on artificial neural networks that mimic the way the brain learns from an accumulation of examples, have become a hot topic in computer science. In addition to enabling technologies such as face- and voice-recognition software, these systems could scour vast amounts of medical data to find patterns that could be useful diagnostically, or scan chemical formulas for possible new pharmaceuticals.
But the computations these systems must carry out are highly complex and demanding, even for the most powerful computers.
Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. Their results appear today in the journal Nature Photonics in a paper by MIT postdoc Yichen Shen, graduate student Nicholas Harris, professors Marin Soljačić and Dirk Englund, and eight others.
The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
The result, Shen says, is that the optical chips using this architecture could, in principle, carry out calculations performed in typical artificial intelligence algorithms much faster and using less than one-thousandth as much energy per operation as conventional electronic chips.
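For context, the workload the photonic processor targets is the ordinary dense matrix multiplication at the heart of every neural-network layer. The sketch below shows that conventional electronic computation only; it is not a model of the optical device itself:

```python
import numpy as np

# One layer of a neural network: the expensive part is the matrix multiply
# (multiply-accumulate operations); the nonlinearity afterwards is cheap.
# The photonic chip described above aims to do the multiply step with
# interfering light beams instead of transistors.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)              # input activations
W = rng.standard_normal((1024, 1024))      # layer weights

z = W @ x                                  # the multiply-accumulate workload
a = np.maximum(z, 0.0)                     # ReLU nonlinearity (negligible cost)

print(f"{W.size} multiply-accumulates for one layer, one input")
```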
Here’s a key breakthrough that signals profound progress in brain-computer interface and could herald many as yet unimagined enhancements.
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read and write to the surrounding neurons. These capabilities could be used to re-establish any degenerative or trauma-induced damage and perhaps write these memories and skills to other, healthier areas of the brain.
Researchers take major step forward in Artificial Intelligence
The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered, artificial brain in the form factor of a high-bandwidth neural implant.
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Rodent testing is on target to begin very soon.
This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device. The Fundamental Code Unit of the Brain (FCU), the Brain Code (BC) and the Biological Co-Processor (BCP) are the latest advanced foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.
The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface as they conduct heat, light and electricity instantaneously, updating the neural laces. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, 'Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.'
A weak signal of our developing understanding of the brain comes from mathematical models. While this approach is very far from significant validation, it indicates an emerging frontier of new paradigms for transforming how we model the brain’s structures and processes.
“We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to 11 dimensions.”
Scientists Discover That Our Brains Can Process the World in 11 Dimensions
Neuroscientists have used a classic branch of maths in a totally new way to peer into the structure of our brains. What they’ve discovered is that the brain is full of multi-dimensional geometrical structures operating in as many as 11 dimensions.
We’re used to thinking of the world from a 3-D perspective, so this may sound a bit tricky, but the results of this new study could be the next major step in understanding the fabric of the human brain – the most complex structure we know of.
This latest brain model was produced by a team of researchers from the Blue Brain Project, a Swiss research initiative devoted to building a supercomputer-powered reconstruction of the human brain.
The team used algebraic topology, a branch of mathematics used to describe the properties of objects and spaces regardless of how they change shape. They found that groups of neurons connect into ‘cliques’, and that the number of neurons in a clique determines its size as a high-dimensional geometric object.
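A toy version of that bookkeeping, under a deliberate simplification: the Blue Brain work counted directed cliques in a detailed cortical model, whereas the sketch below uses undirected cliques on a random graph just to show how a clique of n all-to-all connected neurons gets counted as an (n-1)-dimensional simplex.

```python
import networkx as nx
from collections import Counter

# Stand-in network: 60 "neurons" with random connections (not brain data).
G = nx.erdos_renyi_graph(n=60, p=0.25, seed=1)

dims = Counter()
for clique in nx.enumerate_all_cliques(G):   # every clique, smallest first
    dims[len(clique) - 1] += 1               # dimension = number of neurons - 1

for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} simplices")
```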
Here’s a breakthrough that signals unprecedented capacity to capture images of reality in action. Although not ready for primetime, look for results in the next five years. There is a 4-second video illustrating a photon on the move.
New video camera captures 5 trillion frames every second
High-speed filming could offer view of rapid chemistry, physics phenomena
A new video camera, the fastest by far, has set a staggering speed record. It films 5 trillion frames (equivalent to 5 trillion still images) every second, blowing away the 100,000 frames per second of high-speed commercial cameras. The device could offer a peek at never-before-seen phenomena, such as the blazingly fast chemical reactions that drive explosions or combustion.
Researchers at Lund University in Sweden demonstrated the camera’s speediness by filming particles of light traveling a distance equal to the thickness of a sheet of paper, then slowing down the trillionth-of-a-second journey to watch it.
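The arithmetic behind that demonstration is simple: at 5 trillion frames per second, light advances only tens of micrometres between frames, which is roughly the thickness of a sheet of paper.

```python
# How far does light travel between frames at 5 trillion frames per second?
c = 3.0e8                      # speed of light, m/s
fps = 5.0e12                   # frames per second
frame_interval = 1.0 / fps     # 0.2 picoseconds between frames

distance_per_frame = c * frame_interval
print(f"{distance_per_frame * 1e6:.0f} micrometres per frame")  # ~60 µm,
# about a paper-thickness, so a light pulse crawls across the frame over
# several successive exposures.
```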