Thursday, April 5, 2018

Friday Thinking 6 April 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



Who’s the Next Yahoo?
In July 2016, 22 years after it began as a hobby for Stanford graduate students Jerry Yang and David Filo, Yahoo agreed to sell its core operating business to Verizon in what Forbes writer Brian Solomon called “the saddest $5 billion deal in tech history.”
But few today recall how gloriously it began. Before Google or Facebook, Yahoo was the king of the internet.

Someone who does remember is Jeremy Ring, a top sales executive at Yahoo from 1996 to 2001. His recently published memoir, We Were Yahoo, details the improbable rise and precipitous fall of what was once the biggest internet company on the planet, worth $125 billion at its height.

“Our company was five years old,” Ring wrote. “We were worth more than Ford, Chrysler, and GM combined. Hell, we were worth more than Disney, Viacom, and News Corp combined. Each of those great American brands could have been swallowed up by us.”

The Glory That Was Yahoo

the relative size of social networks and other internet services relative to today’s nation-states — is rather trivial. I want to ask instead: “if today’s platforms were nation states, how best can they use technology, big data, and their knowledge of their users to improve the lives of those users in the same way that nations must aspire to improve the lives of their citizens?” and “if today’s nation states were internet services, how would they act differently?” I’m going to focus here on the second question. Asking this question also helps us to see things that nation states may be doing, now or in the future, as they catch up to the digital capabilities of consumer internet services.

It’s clear that, despite the current backlash against the great tech platforms, trust in them remains high. Expressions of distrust in the media notwithstanding, we haven’t yet seen any kind of exodus from digital technologies. Billions of people still share on Facebook, perform Google searches, buy from Amazon, transact with banks. Billions of people voluntarily give up their location at every moment to their mobile carrier, to their mobile phone operating system provider, and to location-based application providers. Through our phone calls, our text messages, and our emails, we reveal virtually every personal and business relationship. While there are applications for anonymous communication, few bother to adopt them.

For all their flaws, we trust the great tech platforms more than we trust our own government. Governments have much to learn from their successes, and also from their failures.

Why America Slept - We must renew trust in our institutions!

Clark not only rejects the idea of a sealed-off self—he dislikes it. He is a social animal: an eager collaborator, a convener of groups. The story he tells of his thinking life is crowded with other people: talks he’s been to, papers he’s read, colleagues he’s met, talks they’ve been to, papers they’ve read. Their lives and ideas are inextricable from his. His doors are open, his borders undefended. It is perhaps because he is this sort of person that he both welcomed the extended mind and perceived it in the first place. It is clear to him that the way you understand yourself and your relation to the world is not just a matter of arguments: your life’s experiences construct what you expect and want to be true.

Clark heard about the work of a Soviet psychologist named Lev Vygotsky. Vygotsky had written about how children learn with the help of various kinds of scaffolding from the world outside—the help of a teacher, the physical support of a parent. Clark started musing about the ways in which even adult thought was often scaffolded by things outside the head. There were many kinds of thinking that weren’t possible without a pen and paper, or the digital equivalent—complex mathematical calculations, for instance. Writing prose was usually a matter of looping back and forth between screen or paper and mind: writing something down, reading it over, thinking again, writing again. The process of drawing a picture was similar. The more he thought about these examples, the more it seemed to him that to call such external devices “scaffolding” was to underestimate their importance. They were, in fact, integral components of certain kinds of thought. And so, if thinking extended outside the brain, then the mind did, too.

What mattered for the merging of self and world was the incorporation of a thing into cognition, not into a body. But he was fascinated by Kevin Warwick, a professor in the Department of Cybernetics at the University of Reading, who had acquired the nickname Captain Cyborg. Warwick had implanted a silicon chip in his left arm which emitted radio signals that caused doors in his office to open and close and lights and heaters to switch on and off as he moved around. It felt to Warwick that he had become one with his small world, part of a harmoniously synchronized larger system, and the feeling was so pleasant that when it came time to remove the implant he found it hard to let go.

The Mind-Expanding Ideas of Andy Clark

obsessed with transmitting inherited knowledge to new generations, we confine them to rigid frameworks of thought that prevent them from innovating and leave them no choice but to reproduce the patterns they already know. “The transmission of knowledge across the generations seems to be an obvious thing,” regrets Riel Miller, and yet it does not go without saying: there is nothing more presumptuous than claiming to teach the younger generations what they need to know, for what do we really know about what the future holds for them, and therefore about the knowledge they will need?

Don’t say “foresight” anymore, say “futures literacy”

If you were around in the 1980s and 1990s, you may remember the now-extinct phenomenon of “computerphobia”. I have personally witnessed it a few times as late as the early 2000s — as personal computers were introduced into our lives, in our workplaces and homes, quite a few people would react with anxiety, fear, or even aggressiveness. While some of us were fascinated by computers and awestruck by the potential we could glimpse in them, most people didn’t understand them. They felt alien, abstruse, and in many ways, threatening. People feared getting replaced by technology.

Most of us react to technological shifts with unease at best, panic at worst. Maybe that is true of any change at all. But remarkably, most of what we worry about ends up never happening.

What worries me about AI

As the science-fiction writer Cory Doctorow put it in 2014: ‘There is nothing weird about a company doing this – commissioning a story about people using a technology to decide if the technology is worth following through on. It’s like an architect creating a virtual fly-through of a building.’ Generating future scenarios both factual and fictional is increasingly part of the world of future energy management. In this process, the boundary between fictional and factual worlds of energy becomes fluid in a number of interesting ways. Just as it did for Victorian speculators.

Fuelling the future

This is an important breakthrough in the world of Big Massive Data.
The team combined glass with an organic material, halving its lifespan but radically increasing capacity.
To create the nanoplasmonic hybrid glass matrix, gold nanorods were incorporated into a hybrid glass composite, known as organic modified ceramic.
The researchers chose gold because like glass, it is robust and highly durable. Gold nanoparticles allow information to be recorded in five dimensions – the three dimensions in space plus colour and polarisation.

Golden touch: next-gen optical disk to solve data storage challenge

Now scientists from RMIT University and Wuhan Institute of Technology, China, have used gold nanomaterials to demonstrate a next-generation optical disk with up to 10TB capacity - a storage leap of 400 per cent - and a six-century lifespan.
The technology could offer a more cost-efficient and sustainable solution to the global data storage problem while enabling the critical pivot from Big Data to Long Data, opening up new realms of scientific discovery.

The recent explosion of Big Data and cloud storage has led to a parallel explosion in power-hungry data centres.
These centres not only use up colossal amounts of energy - consuming about 3 per cent of the world’s electricity supply - but largely rely on hard disk drives that have limited capacity (up to 2TB per disk) and life-spans (up to two years).

The technology could radically improve the energy efficiency of data centres - using 1000 times less power than a hard disk centre - by requiring far less cooling and doing away with the energy-intensive task of data migration every two years. Optical disks are also inherently far more secure than hard disks.

This may not be your next laptop, but for AI, cloud applications and blockchain processing this is more of Moore’s Law — today’s corporate mainframe-cloud-rack and tomorrow’s home cloud.
It’s $399K for the world’s most powerful computer. This replaces $3M worth of 300 dual-CPU servers consuming 180 kilowatts. That is 1/8th the cost, 1/60th the space, and 1/18th the power.
In less than a decade, the computing power of GPUs has grown 20x — representing growth of 1.7x per year, far outstripping Moore’s law.
The software delivers up to 190x faster deep learning inference than CPUs for common applications such as computer vision, neural machine translation, automatic speech recognition, speech synthesis and recommendation systems.

Nvidia DGX-2 is 2 petaflop AI supercomputer for $399,000

NVIDIA launched the NVIDIA DGX-2, the first single server capable of delivering two petaflops of computational power. A DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient.

The NVIDIA DGX-2 is a server rack with 16 Volta GPUs and dual Xeon Platinums for $399,000. It packs a total of 81,920 CUDA cores with 512 GB of HBM2 memory, 14.4 TB/s of aggregate bandwidth, and 300 GB/s GPU-to-GPU bandwidth. The rack consumes 10,000 watts in total and weighs 350 pounds.

Five years ago, AlexNet took six days to train on two GTX 580s. That can now be done in 18 minutes on a DGX-2.
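A quick sanity check of the claimed savings — the input figures are taken from the article above; the ratio arithmetic is mine:

```python
# Claimed DGX-2 savings versus a 300-server, 15-rack dual-CPU cluster.
# All input figures come from the article; the ratios are recomputed here.
dgx2_cost, cluster_cost = 399_000, 3_000_000    # USD
dgx2_power, cluster_power = 10_000, 180_000     # watts

print(f"cost:  1/{cluster_cost / dgx2_cost:.1f}")    # ~1/7.5, roughly the claimed 1/8
print(f"power: 1/{cluster_power / dgx2_power:.0f}")  # exactly the claimed 1/18
```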

Henry Mintzberg is a national treasure — a pragmatic observer of human systems and their management. This is a good post on both our medical systems and our management frameworks. It is not a long read and well worth it.

The Great Strength and Debilitating Weakness of Modern Medicine… and Management

consider how professional work tends to be organized. Much of it is rather standardized, carried out by highly-trained people with a good deal of individual autonomy—at least from their colleagues, if not from the professional associations that set their standards. Just as the musicians of a symphony orchestra play in harmony while each plays to the notes written for his or her instrument, so too can a surgeon and anesthetist spend hours in an operating room without exchanging a single word. By virtue of their training, each knows exactly what to expect of the other.

Accordingly, much of modern medicine does not solve problems in an open-ended way so much as categorize patients’ conditions in a restricted way. Each is slotted into an established category of disease—a process known as diagnosis—to which an established, ideally evidence-based treatment—referred to as a set of protocols—can be applied.

The predetermined standards—those protocols—are tailored to the condition in question. The patient presents with a pain in the chest; the diagnosis indicates a blocked artery; a particular stent is installed in a particular place; and an administrative box is ticked so that a standard payment can be made.
The great strength of modern medicine lies in the fits that work. The patient enters the hospital with a diseased heart and leaves soon after with a repaired one. But where the fit fails can be found modern medicine’s debilitating weakness. Fits fail, more often than generally realized, beyond the categories, across the categories, and beneath the categories.

This is a fascinating website signalling vital weather changes in the ‘climate of social trust’. There are downloadable reports, videos, graphs and more. A must see for anyone interested in the social capital of Trust. That said - I’m not sure of the impartiality of the organization.

The 2018 Edelman TRUST BAROMETER

The 2018 Edelman TRUST BAROMETER reveals a world of seemingly stagnant distrust. People’s trust in business, government, NGOs and media remained largely unchanged from 2017 — 20 of 28 markets surveyed now lie in distruster territory, up one from last year. Yet dramatic shifts are taking place at the market level and within the institution of media.

The Polarization of Trust
The world is moving apart in trust. In previous years, market-level trust has moved largely in lockstep, but for the first time ever there is now a distinct split between extreme trust gainers and losers. No market saw steeper declines than the United States, with a 37-point aggregate drop in trust across all institutions. At the opposite end of the spectrum, China experienced a 27-point gain, more than any other market.

In Search of Truth
Globally, nearly seven in 10 respondents among the general population worry about fake news or false information being used as a weapon, and 59 percent say that it is getting harder to tell if a piece of news was produced by a respected media organization.
- In this environment, media has become the least-trusted institution for the first time in Trust Barometer history — yet, at the same time, the credibility of journalists rose substantially. A number of factors are driving this paradox.
- Confusion about the credibility of news is connected to the broad, wide definition of media that Trust Barometer respondents now hold. Some people consider platforms to be part of “the media” — including social media (48 percent) and search engines (25 percent) — alongside journalism (89 percent), which includes publishers and news organizations.

This year, trust in journalism jumped five points while trust in platforms dipped two points. In addition, the credibility of “a person like yourself” — often a source of news and information on social media — dipped to an all-time low in the study’s history. Most likely, the falloff of trust in social and search, and of the credibility of peer communication, are contributing to the overall decline of trust in media.

Speaking of trust — this is an important comment on the current state of the social media crisis, perhaps a tipping point towards developing democratic legislative protections for a more transparent and accountable digital environment.
In 2016, the European Union passed the comprehensive General Data Protection Regulation, or GDPR. The details of the law are far too complex to explain here, but some of the things it mandates are that personal data of EU citizens can only be collected and saved for "specific, explicit, and legitimate purposes," and only with explicit consent of the user. Consent can't be buried in the terms and conditions, nor can it be assumed unless the user opts in. This law will take effect in May, and companies worldwide are bracing for its enforcement.

Facebook and Cambridge Analytica

But for every article about Facebook's creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it's Facebook and not them in the spotlight. Because while Facebook is one of the biggest players in this space, there are thousands of other companies that spy on and manipulate us for profit.

Harvard Business School professor Shoshana Zuboff calls it "surveillance capitalism." And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it's up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and -- if not -- what to do about it.

There are 2,500 to 4,000 data brokers in the United States whose business is buying and selling our personal data. Last year, Equifax was in the news when hackers stole personal information on 150 million people, including Social Security numbers, birth dates, addresses, and driver's license numbers.

The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it? I don't see any hope of Congress passing a GDPR-like data protection law anytime soon, but it's not too far-fetched to demand laws requiring these companies to be more transparent in what they're doing.

This is a very good discussion of the 20th-century framework of social convention based on broadcast media — one that highlights how social media is challenging the way we coordinate our social conventions. It is well worth careful attention.

Understanding the Blue Church

What is the nature of the social control structure that emerged in the 20th Century, drove the expansion of human society and is now in the midst of falling apart (and being replaced by  . . . ?)

This is an interesting 4-min video introducing a new form of co-living. What is interesting is that this could be a model for new forms of retirement communities, communities for people with a range of special needs, or mixed communities.

Discover The Collective Old Oak

A look inside life at The Collective Old Oak, the world's largest co-living community in London.

Is Broadcast media dead? Maybe it will be reborn.

First IP-Based Standard for Digital TV Could Change the Face of Broadcasting

ATSC 3.0 promises immersive audio, interactivity, and hyperlocal emergency alerts
Instead of rushing home to catch the finale of your favorite cable TV show, you soon might be able to tune in on your laptop, tablet, or smartphone. Thanks to a new suite of standards, broadcasters will be able to provide customers with access to live programming anytime and anywhere, and on multiple devices. They will also offer interactivity as well as better sound and picture quality.

These are some of the features made possible by ATSC 3.0, a suite of standards for digital terrestrial broadcasting, authorized for the United States in November by the Federal Communications Commission. The suite incorporates the first IP-based broadcast standard, allowing broadcasting companies to simultaneously transmit content over the airwaves and the Internet.

ATSC stands for Advanced Television Systems Committee, a nonprofit in Washington, D.C., that develops voluntary technical standards for digital television.

This is a good summary of the economic and security inevitability of distributed energy paradigms enabled by renewable energy technology.

The Death Of Traditional Power Grids

The problem with centralized power grids is that they can be crippled at just one point of failure, leaving consumers vulnerable to outages. According to Mark Feasel of Schneider Electric, the cost of such outages for the U.S. economy overall is $150 billion a year. An irritating inconvenience for domestic consumers, prolonged outages are expensive, damaging and potentially fatal to businesses of all scales. Insurance may not necessarily cover businesses that are forced to close due to power outages, just as it may not reimburse damage to property or stock. Given that the question of outages is likely to be when rather than if, it is no surprise that many businesses are looking to augment their power needs with backup systems. While for some that may simply be something like a backup generator, many more are utilising microgrids.

Put simply, a microgrid contains localised energy generation, distribution and in some cases, storage. Microgrids are generally used in discrete locations to provide all of the power needs of that site, but they also work in tandem with a centralized grid, augmenting or providing backup power to that supply.

The main benefits of microgrids are threefold: they are local, independent and intelligent. When energy is produced locally, the grid itself becomes more efficient. Delivering electricity from centralized grids leads to losses of between 8 and 15 percent. This locality also means that the site isn’t susceptible to power outages that affect the central grid. In such an event, the microgrid can take control of the delivery of power before there is any loss, eliminating blackouts and brownouts. The way it does this is by use of intelligent switching. A microgrid can monitor all aspects of the power system, and thereby intelligently switch between the local grid and the wider grid, depending on various factors. It can, for example, monitor price fluctuations and only draw from the main grid when prices are low, switching to local supply when they rise.
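The switching logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual controller — the function name, signature, and thresholds are invented:

```python
def choose_source(grid_price_per_kwh: float,
                  local_cost_per_kwh: float,
                  grid_available: bool) -> str:
    """Decide which supply a microgrid controller should draw from."""
    if not grid_available:
        # Central-grid outage: island onto local generation before any loss.
        return "local"
    # Otherwise draw from the main grid only while it is the cheaper option.
    return "grid" if grid_price_per_kwh < local_cost_per_kwh else "local"
```

With local solar at, say, 12 cents/kWh, such a controller would buy from the grid at 8 cents, switch to local supply at 20 cents, and island immediately during an outage.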

Here’s a good signal of the rapidly emerging phase transition in global energy geopolitics.
“With storage we can smooth out the intermittence of renewable energy and guarantee the balancing of power grids,” EDF chief executive Jean-Bernard Levy told reporters.

EDF to invest 8 billion euros in power storage business

French state-owned utility EDF plans to invest eight billion euros (7 billion pounds) between 2018 and 2035 to become a European market leader in electricity storage.

EDF, which already operates pumped storage hydropower plants and some utility-scale power storage batteries, said on Tuesday it aimed to become a French and European market leader, offering storage batteries for customers in the retail power sector.

Power storage usage by European utilities is growing as the intermittent supply from renewable energies like solar and wind forces them to have more power reserves on standby.

EDF’s investment in power storage will focus on boosting the resilience of power grids, on individual storage for retail customers with solar panels, and on off-grid solar plus storage systems in Africa, notably in Ghana and Ivory Coast.

A strong signal of the economic driver of the transition in energy geopolitics.

Solar Power Energy Payback Time Is Now Super Short

Some solar power critics seem to enjoy trying to point out that the energy payback time for solar power is too long, and therefore this form of renewable energy is not valid. Those critics have not kept up with the times or are simply lying to you.

Years ago, when solar cells were less efficient, there might have been some truth in questioning the energy payback of solar panels because they were most likely manufactured using electricity generated from coal, natural gas, or nuclear power and were less efficiently manufactured.

Today’s solar panels are more efficient, so they produce more electricity, and this fact along with more efficient manufacturing means that energy payback periods have decreased to just a few years. Research has found, “Energy payback estimates for rooftop PV systems are 4, 3, 2, and 1 years: 4 years for systems using current multicrystalline-silicon PV modules, 3 years for current thin-film modules, 2 years for anticipated multicrystalline modules, and 1 year for anticipated thin-film modules (see Figure 1). With energy paybacks of 1 to 4 years and assumed life expectancies of 30 years, 87% to 97% of the energy that PV systems generate won’t be plagued by pollution, greenhouse gases, and depletion of resources.”

Other estimates also show solar is viable, with impressively short energy payback periods: “In Australia, the International Energy Agency calculated the energy payback period for a solar power system to be under two years. This means a solar power system takes less than two years to generate enough energy to break even on the amount of energy taken to manufacture it.”
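The 87%-to-97% figure follows directly from the payback and lifetime numbers quoted above; a quick check of the arithmetic:

```python
# Fraction of a PV system's 30-year output generated after energy payback,
# for each payback time quoted in the article.
life_years = 30
for payback in (4, 3, 2, 1):
    fraction = (life_years - payback) / life_years
    print(f"{payback}-year payback -> {fraction:.0%} of lifetime output is net-positive")
# 4 years -> 87%, 1 year -> 97%, matching the quoted 87%-97% range.
```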

And we can expect that this initiative or others very similar to it will be rolling out in the next few years. If the news is positive and includes the continuing decline in prices for panels and storage - this can provide a significant motivation to accelerate the shift into renewable energy. Sometimes things move slowly until they move very fast.

Google to map potential of UK domestic solar energy generation

Google is planning to use satellite imagery to map the “solar potential” of Britain’s rooftops as part of a push into the renewable energy industry.

The data could be used to encourage British households to install solar panels on their roofs to help cut energy bills.

Using imagery from the Google Maps and Google Earth applications, the tech giant will calculate the total amount of sunlight that falls on a rooftop every year by using weather data, the position of the sun across the seasons, the size and pitch of the roof, as well as any shadows from surrounding buildings or trees.

The project also uses machine learning to create a computer tool that can assess the “solar potential” of a rooftop in seconds.

Another signal in the transformation of energy geopolitics.

In a first, a new UK coal mine is rejected on climate change grounds

Coal mining is in decline on the island that helped make it big.
Britain's Communities Secretary rejected plans for a new Northumberland open-pit coal mine, citing climate change concerns. The rejection was the first of its kind to rest on climate change concerns in the United Kingdom. The country has committed to phasing out coal use at power plants by 2025.

Member of Parliament Sajid Javid overturned the recommendation of the UK planning inspector as well as the Northumberland county council's approval last week, according to the Financial Times. The project would have been completed before 2025 and would have employed about 100 people. Still, the mine would have been situated near Druridge Bay and would have destroyed landscape and heritage assets, in addition to contributing to climate change.

Here’s a signal to watch - one that could mitigate our current condition regarding climate change.

Scientists say we’re on the cusp of a carbon dioxide–recycling revolution

Every year, the billions of metric tons of carbon dioxide (CO2) we release into the atmosphere add to the growing threat of climate change. But what if we could simply recycle all that wasted CO2 and turn it into something useful?

By adding electricity, water, and a variety of catalysts, scientists can reduce CO2 into short molecules such as carbon monoxide and methane, which they can then combine to form more complex hydrocarbon fuels like butane. Now, researchers think we could be on the cusp of a CO2-recycling revolution, which would capture CO2 from power plants—and maybe even directly from the atmosphere—and convert it into these fuels at scale, they report today in Joule.

Science talked with one of the study’s authors, materials scientist and graduate student Phil De Luna of the University of Toronto in Canada, about how CO2 recycling works — and what the future holds for these technologies. This interview has been edited for clarity and length.

This is an interesting signal of the future of some forms of housing. There is a 2 min video.

The Vulcan 3D Printer Can Create A House In Just 48 Hours

Some amazing new technology unveiled at SXSW this year could represent a quantum leap for homebuilding in the developing world.

Jason Ballard, the co-founder and president of Austin’s green home improvement store, TreeHouse, teamed up with New Story to create a new construction technologies company called Icon. The collaborative effort resulted in the development of the Vulcan, a 3D printer that creates concrete structures at a fraction of the cost of traditional building methods.

The Vulcan can print the layout of an up-to-code 350 square-foot home in just about 24 hours. After that, the conventionally made finishing touches like a roof, windows, doors, paint, and electrical are added to the frame.
This process can create a finished home in just two days.

The goal of Icon and New Story is to be able to make 600-800 square-foot homes in less than a day at the cost of $4,000 per home. Once that’s accomplished, the companies will begin construction on an entire community of 3D-printed homes in El Salvador.
Icon and New Story hope to reach this goal in about 18 months.

This is an important signal of the potential to solve the problem of providing everyone in the world with inexpensive drinking water. The five examples should be inspiring to anyone interested in water innovation.

The machines harvesting fresh water from thin air

Fresh water isn't available to millions of people. Capturing water from the air may help to provide water for all.
Can you pluck fresh, clean drinking water out of thin air? The Water Abundance XPRIZE has shortlisted five companies that think they can.

After kicking off with the literal moonshot prize back in 2004, the XPRIZE foundation tries to solve global problems that industry sees no profit in. This year’s clean water challenge aims to supply the 2.1 billion people who currently lack safe drinkable water with a device that can extract “a minimum of 2,000 litres of water per day from the atmosphere using 100 per cent renewable energy, at a cost of no more than 2 cents per litre.”

The overall goal is to replace costly desalination plants that produce CO2 and pump brine back into the seas – damaging the climate and the ecosystem. All equipment – including maintenance for 10 full years – has to cost less than $146,000.
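The prize’s two cost constraints are in fact the same number: amortising the $146,000 cap over ten years of output at 2,000 litres a day gives exactly the 2-cents-per-litre ceiling. My arithmetic, ignoring discounting:

```python
# Ten years of the XPRIZE's minimum daily output.
litres_over_ten_years = 2_000 * 365 * 10        # 7.3 million litres
cost_per_litre = 146_000 / litres_over_ten_years
print(f"{cost_per_litre * 100:.1f} cents per litre")  # 2.0
```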

The first round of testing took place in January, with round two looming in July. The winner takes home $1 million, but each shortlisted team wins $250,000. These are the companies trying to win the prize.

Everyday there are new articles about the looming transformation of employment by automation, AI and 3D printing. This is another signal in the same stream - but it provides an interesting perspective.
It has been suggested that poorer countries will not be as affected by automation because they have less money to invest in it.
"Our research shows that this is overly optimistic. Currently the cost of operating robots in furniture manufacturing is still higher than labour, but this will not be the case within 15 years", Dirk Willem te Velde, director of the Supporting Economic Transformation programme at ODI, said in a statement.

Robots and automation: How Africa is at risk

Within less than two decades it will be cheaper to operate robots in US factories than hire workers in Africa, a new report warns.
Falling automation costs are predicted to cause job losses as manufacturers return to richer economies.
Some analysts say poorer countries could be less impacted by this trend, however the Overseas Development Institute (ODI) suggests otherwise.

But its report adds African nations have time to prepare for the change.
ODI's report, Digitalisation and the Future of Manufacturing in Africa, found that in furniture manufacturing, the cost of operating robots and 3D printers in the US will be cheaper than Kenyan wages by 2034.
In Ethiopia, ODI predicts robotic automation will be cheaper than Ethiopian workers between 2038 and 2042.

This may well be an important breakthrough for understanding both human learning and AI. The illustrations in the article are very helpful for understanding the model.

The brain learns completely differently than we’ve assumed, new learning theory says

New post-Hebb brain-learning model may lead to new brain treatments and breakthroughs in faster deep learning
A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Hebb was wrong, says Kanter. “A new type of experiments strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”
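Kanter’s efficiency argument is ultimately about parameter counts. A toy comparison with made-up numbers — the counts below are hypothetical, chosen only to illustrate the scale difference between the two learning scenarios:

```python
# Hypothetical counts -- not taken from the paper.
neurons = 1_000
synapses_per_neuron = 1_000       # "thousands of tiny and sensitive" parameters
dendritic_params_per_neuron = 5   # "a few adaptive parameters per neuron"

synaptic_total = neurons * synapses_per_neuron            # 1,000,000
dendritic_total = neurons * dendritic_params_per_neuron   # 5,000

print(synaptic_total // dendritic_total)  # 200 -- a 200-fold reduction
```

Even with these invented figures, the dendritic scenario leaves the brain (or a deep-learning system) with orders of magnitude fewer parameters to adapt.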

This is a good signal of increasing progress in our understanding of the brain and neural cognitive capacities. As the article explains, the problems of conventional fixed scanners “have been solved in the new scanner by scaling down the technology and taking advantage of new ‘quantum’ sensors that can be mounted in a 3D-printed prototype helmet.”

New wearable brain scanner allows patients to move freely for the first time

A new generation of brain scanner that can be worn like a helmet, allowing patients to move naturally whilst being scanned, has been developed by researchers at the Sir Peter Mansfield Imaging Centre, University of Nottingham, and the Wellcome Centre for Human Neuroimaging, UCL. It is part of a five-year Wellcome-funded project which has the potential to revolutionise the world of human brain imaging.

In a Nature paper published today (21 March), the researchers demonstrate that they can measure brain activity while people make natural movements, including nodding, stretching, drinking tea and even playing ping pong. Not only can this new, light-weight, magnetoencephalography (MEG) system be worn, but it is also more sensitive than currently available systems.

The researchers hope this new scanner will improve research and treatment for patients who can't use traditional fixed MEG scanners, such as young children with epilepsy or patients with neurodegenerative disorders like Parkinson's disease.

The quest for new forms of antibacterial agents has found some success, with the promise of more to come.
“Environmental microbes are in a continuous antibiotic arms race that is likely to select for antibiotic variants capable of circumventing existing resistance mechanisms,”

New type of antibiotic discovered in soil in breakthrough for fight against drug-resistant superbugs

After years of drought in drug discovery, scientists hail good news in 'antibiotic arms race'
Scientists trawling through thousands of soil samples have discovered a whole new class of antibiotics capable of killing drug-resistant bacteria.
The chemical, which has been called malacidin, appears to be non-toxic in humans and effective in tackling the hospital superbug MRSA, raising hopes that it could be used to develop a new treatment.

Scientists say antimicrobial resistance - the ability of diseases to fight back against and ultimately become immune to known treatments - represents one of the greatest threats to humanity.
Finding new antibiotics is key to staying one step ahead of bacteria threats, but only one new class - teixobactin - has been discovered in the last 33 years.

In their study, published as a letter in the journal Nature Microbiology, the authors cautioned that it is only effective against one group of bacteria - the gram positives, which include MRSA.
Nonetheless, they wrote that the breakthrough suggests there may be more compounds like malacidin waiting to be discovered.
