Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st Century
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes:
Articles:
In these ‘big picture’ studies, scientists stretch beyond their areas of expertise to try to answer the question of what it means to be human. Psychologists become physiologists. Biologists become psychologists. Neuroscientists become anthropologists. And everyone’s a philosopher.
Survival: the first 3.8 billion years
Lisa Feldman Barrett ponders Joseph LeDoux’s study on how conscious brains evolved.
This is the great mystery of human vision: Vivid pictures of the world appear before our mind’s eye, yet the brain’s visual system receives very little information from the world itself. Much of what we “see” we conjure in our heads.
“A lot of the things you think you see you’re actually making up,” said Lai-Sang Young, a mathematician at New York University. “You don’t actually see them.”
Yet the brain must be doing a pretty good job of inventing the visual world, since we don’t routinely bump into doors. Unfortunately, studying anatomy alone doesn’t reveal how the brain makes these images up any more than staring at a car engine would allow you to decipher the laws of thermodynamics.
there’s very little connectivity between the retina and the visual cortex. For a visual area roughly one-quarter the size of a full moon, there are only about 10 nerve cells connecting the retina to the visual cortex. These cells make up the LGN, or lateral geniculate nucleus, the only pathway through which visual information travels from the outside world into the brain.
“You may think of the brain as taking a photograph of what you see in your visual field,” Young said. “But the brain doesn’t take a picture, the retina does, and the information passed from the retina to the visual cortex is sparse.”
But then the visual cortex goes to work. While the cortex and the retina are connected by relatively few neurons, the cortex itself is dense with nerve cells. For every 10 LGN neurons that snake back from the retina, there are 4,000 neurons in just the initial “input layer” of the visual cortex — and many more in the rest of it. This discrepancy suggests that the brain heavily processes the little visual data it does receive.
“The visual cortex has a mind of its own,”
A Mathematical Model Unlocks the Secrets of Vision
They all appeared in the brilliant and innovative “To-Day and To-Morrow” books from the 1920s, which signal the beginning of our modern conception of futurology, in which prophecy gives way to scientific forecasting. This series of over 100 books provided humanity – and science fiction – with key insights and inspiration. I’ve been immersed in them for the last few years while writing the first book about these fascinating works – and have found that these pioneering futurologists have a lot to teach us.
In their early responses to the technologies emerging then – aircraft, radio, recording, robotics, television – the writers grasped how those innovations were changing our sense of who we are. And they often gave startlingly canny previews of what was coming next, as in the case of Archibald Low, who in his 1924 book Wireless Possibilities, predicted the mobile phone: “In a few years time we shall be able to chat to our friends in an aeroplane and in the streets with the help of a pocket wireless set.”
Futurology: how a group of visionaries looked beyond the possible a century ago and predicted today’s world
In Einstein’s autobiographical writing from 1949, he expands on how Hume helped him formulate the theory of special relativity. It was necessary to reject the erroneous ‘axiom of the absolute character of time, viz, simultaneity’, since the assumption of absolute simultaneity
- unrecognisedly was anchored in the unconscious. Clearly to recognise this axiom and its arbitrary character really implies already the solution of the problem. The type of critical reasoning required for the discovery of this central point [the denial of absolute time, that is, the denial of absolute simultaneity] was decisively furthered, in my case, especially by the reading of David Hume’s and Ernst Mach’s philosophical writings.
No absolute time
we think that all mammals share seven foundational affective systems: FEAR, LUST, CARE, PLAY, RAGE, SEEKING and PANIC/GRIEF (Panksepp capitalises the names to indicate that these specific affects are physiological/behavioural systems as well as correlated subjectively experienced emotions. But the psychological and physiological aspects are not separable in a dualistic or even epiphenomenal way). Each of these has specific neural electrochemical pathways, with accompanying feeling states and behaviour patterns. …. These seven emotions are universal in humans (and mammals), but they are filtered through the three layers of mind, creating tremendous diversity.
The impressive achievements of the human cognitive niche are often heralded, while the emotional niche has gone unsung. The advances of complex tool industry, for example, or the evolution of our distinctive human family structures, could never have happened without parallel advances in the emotional life of Homo sapiens. Humans would not be such masterful cooperators, especially in non-kin social groups, if they did not undergo some significant emotional domestication that sculpted our motivations and desires towards prosocial coexistence.
United by feelings
Cory Doctorow - a founder of the Electronic Frontier Foundation, a Canadian, and a best-selling science fiction writer - discusses with the hosts of the Modern Monetary Theory podcast the potential of new business models and a new economic paradigm. This is a MUST HEAR podcast for anyone interested in the digital economy.
The MMT Podcast with Patricia Pino & Christian Reilly
#26 & #27 - Cory Doctorow: Radicalize This! (part 1 & part 2)
Christian talks to author and activist Cory Doctorow about his recent writing on MMT, his book “Radicalized”, and how storytelling can bring esoteric concepts to life.
This is a great signal of not just a new research domain - but the emergence of AI and complexity anthropology. I wish I could have become a Digital Anthropologist.
The Anthropologist of Artificial Intelligence
The algorithms that underlie much of the modern world have grown so complex that we can’t always predict what they’ll do. Iyad Rahwan’s radical idea: The best way to understand them is to observe their behavior in the wild.
Naming this emerging field also legitimizes it. If you’re an economist or a psychologist, you’re a serious scientist studying the complex behavior of people and their agglomerations. But people might consider it less important to study machines in those systems as well.
So when we brought together this group and coined this term “machine behavior,” we’re basically telling the world that machines are now important actors in the world. Maybe they don’t have free will or any legal rights that we ascribe to humans, but they are nonetheless actors that impact the world in ways that we need to understand. And when people of high stature in those fields sign up [as co-authors] to this paper, that sends a very strong signal.
Another signal of the darker shadow of monopoly platforms in the digital environment.
Even though government statistics show that crime in the United States has been steadily decreasing for decades, people’s perception of crime and danger in their communities often conflicts with the data. Vendors prey on these fears by creating products that inflame our greatest anxieties about crime.
Amazon’s Ring Is a Perfect Storm of Privacy Threats
Doors across the United States are now fitted with Amazon’s Ring, a combination doorbell-security camera that records and transmits video straight to users’ phones, to Amazon’s cloud—and often to the local police department. By sending photos and alerts every time the camera detects motion or someone rings the doorbell, the app can create an illusion of a household under siege. It turns what seems like a perfectly safe neighborhood into a source of anxiety and fear. This raises the question: do you really need Ring, or have Amazon and the police misled you into thinking that you do?
Recent reports show that Ring has partnered with police departments across the country to hawk this new surveillance system—going so far as to draft press statements and social media posts for police to promote Ring cameras. This creates a vicious cycle in which police promote the adoption of Ring, Ring terrifies people into thinking their homes are in danger, and then Amazon sells more cameras.
This is a very hopeful signal, highly relevant to knowledge management and the accelerating increase in specialized knowledge. There is too much to know, and that leads many intellectuals to increasingly ‘discipline’ their minds into ever narrower domains of interest. Cultural languages may be disappearing, but technical and scientific languages are increasing. We need new tools to harness the power of proliferating fields of interest.
"Our project is motivated by the need of improving the readability of journal articles," Weijia Xu, who led the team at TACC, told TechXplore. "It is a joint effort between biological curators, journal publishers and computer scientists aimed at developing a web service that can recognize and enable author curation of important terminology used in journal publications. The terminology and words are then attached to the end of the journal article in order to increase its accessibility for readers."
A web application to extract key information from journal articles
Academic papers often contain accounts of new breakthroughs and interesting theories related to a variety of fields. However, most of these articles are written using jargon and technical language that can only be understood by readers who are familiar with that particular area of study.
Non-expert readers are thus typically unable to understand scientific articles, unless they are curated and made more accessible by third parties who understand the concepts and ideas contained within them. With this in mind, a team of researchers at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, Oregon State University (OSU) and the American Society of Plant Biologists (ASPB) has set out to develop a tool that can automatically extract important phrases and terminology from research papers in order to provide useful definitions and enhance their readability.
Xu and his colleagues developed an extensible framework that can be used to extract information from documents. They then implemented this framework within a web service called DIVE (Domain Information Vocabulary Extraction), integrating it with the journal publication pipeline of the ASPB. Unlike existing tools for extracting domain information, their framework combines several approaches, including ontology-guided extraction, rule-based extraction, natural language processing (NLP) and deep learning techniques.
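The article doesn't show DIVE's code, but the rule-based side of such a hybrid pipeline can be sketched in a few lines. This is a minimal illustration, not DIVE's actual implementation - the function name, patterns, and vocabulary here are hypothetical:

```python
import re
from collections import Counter

def extract_candidate_terms(text, known_vocabulary):
    """Rule-based pass: collect acronyms and hyphenated technical
    phrases, then keep those defined in a curated domain vocabulary
    (a stand-in for the ontology-guided step)."""
    # Acronyms such as 'NLP' or 'ASPB'
    acronyms = re.findall(r"\b[A-Z]{2,}\b", text)
    # Hyphenated technical phrases such as 'ontology-guided'
    phrases = re.findall(r"\b[a-z]+(?:-[a-z]+)+\b", text)
    candidates = Counter(acronyms + phrases)
    # Keep only terms the vocabulary defines, paired with definitions
    return {t: known_vocabulary[t] for t in candidates if t in known_vocabulary}

vocab = {"NLP": "natural language processing",
         "ontology-guided": "extraction constrained by a curated ontology"}
sample = "The framework combines ontology-guided extraction with NLP."
print(extract_candidate_terms(sample, vocab))
```

A real system like DIVE layers NLP and deep-learning models on top of rules like these to catch terms no fixed pattern or vocabulary anticipates.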
This is a very important signal about the value of information as an ‘anti-rival’ good that when openly shared exponentially increases in value.
Waymo is making some of its self-driving car data available for free to researchers
12 million 3D labels, 1,000 driving segments, and a partridge in a pear tree
The data collected by self-driving cars used to be a closely guarded secret. But recently, many companies developing autonomous driving systems have begun to release their data to the research community in dribs and drabs. The latest to do so is Waymo, the self-driving unit of Alphabet, which today is making some of the high-resolution sensor data gathered by its fleet of autonomous vehicles available to researchers.
Waymo says its dataset contains 1,000 driving segments, with each segment capturing 20 seconds of continuous driving. Those 20-second clips correspond to 200,000 frames at 10 Hz per sensor, which will allow researchers to develop their own models to track and predict the behavior of everyone using the road, from drivers to pedestrians to cyclists.
The data was collected by Waymo’s fleet in four cities: San Francisco, Mountain View, Phoenix, and Kirkland, Washington. It includes images captured by each vehicle’s sensors, which include LIDAR, cameras, and radar. Images containing vehicles, pedestrians, cyclists, and signage have been carefully labeled, yielding a total of 12 million 3D labels and 1.2 million 2D labels.
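As a quick check, the per-sensor frame count follows directly from the figures quoted:

```python
# Dataset arithmetic from the figures quoted above.
SEGMENTS = 1_000          # driving segments in the release
SECONDS_PER_SEGMENT = 20  # continuous driving per segment
FRAME_RATE_HZ = 10        # capture rate per sensor

frames_per_sensor = SEGMENTS * SECONDS_PER_SEGMENT * FRAME_RATE_HZ
print(frames_per_sensor)  # 200000, matching the article's 200,000 frames per sensor
```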
This is a great signal of self-driving capabilities entering a new stage of maturity - and given the initial market - will likely contribute to a new form of ‘generation gap’.
Thousands of autonomous delivery robots are about to descend on US college campuses
Starship Technologies announces an expansion of its robot delivery service after raising $40 million
The quintessential college experience of getting pizza delivered to your dorm room is about to get a high-tech upgrade. On Tuesday, Starship Technologies announced its plan to deploy thousands of its autonomous six-wheeled delivery robots on college campuses around the country over the next two years, after raising $40 million in Series A funding.
It’s a big step for the San Francisco (née Estonia)-based startup, whose robots have been tested in over 100 cities in 20 different countries, traveled 350,000 miles, crossed 4 million streets, and just marked the milestone of the company’s 100,000th delivery. College campuses, with their abundance of walking paths, well-defined boundaries, and smartphone-using, delivery-minded student bodies, are an obvious place for Starship to stake out the next phase of its business.
Another milestone in the ongoing adventures of computational paradigms.
6 Things to Know About the Biggest Chip Ever Built
Startup Cerebras has built a wafer-size chip for AI, but it isn’t the only one possible
At a conference at Stanford University, startup Cerebras unveiled the largest chip ever built: a roughly silicon-wafer-size system meant to reduce AI training time from months to minutes. It is the first commercial attempt at a wafer-scale processor since Trilogy Systems failed at the task in the 1980s.
Size: 46,225 square millimeters. That’s about 75 percent of a sheet of letter-size paper, but 56 times as large as the biggest GPU.
Transistors: 1.2 trillion. Nvidia’s GV100 Volta packs in 21 billion.
Processor cores: 400,000.
Memory: 18 gigabytes of on-chip SRAM. Cerebras says this is 3,000 times as much as the GPU.
Memory bandwidth: 9 petabytes per second. According to Cerebras, that’s 10,000 times that of the GPU.
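The comparative claims in those specs can be sanity-checked with quick arithmetic; note the GPU die area below is derived from the stated "56 times" ratio, not given in the article:

```python
# Sanity-check of the comparative claims in the spec sheet above.
cerebras_transistors = 1.2e12   # 1.2 trillion
gv100_transistors = 21e9        # Nvidia GV100 Volta

# How many GV100s' worth of transistors fit on the wafer-scale chip
print(round(cerebras_transistors / gv100_transistors))  # 57

chip_area_mm2 = 46_225
area_ratio = 56                 # "56 times as large as the biggest GPU"
implied_gpu_area = chip_area_mm2 / area_ratio
print(round(implied_gpu_area))  # 825 (mm^2), consistent with the largest GPU dies of the era
```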
the demand for training deep learning systems and other AI systems is getting out of hand. The company says that training a new model—creating a system that, once trained, can recognize people or win a game of Go—is taking weeks or months and costing hundreds of thousands of dollars of compute time. That cost means there’s little room for experimentation, and that’s stifling new ideas and innovation.
The startup’s answer is that the world needs more, and cheaper, training compute resources. Training needs to take minutes not months, and to do that you need more cores, more memory close to those cores, and a low-latency, high-bandwidth connection between the cores.
Another signal confirming the speed of renewable energy’s displacement of traditional energy sources.
Solar Power Is Now as Inexpensive as Grid Electricity in China
Researchers found that PV systems could produce electricity at a lower price than the grid in 344 cities
Solar power now costs the same as, or less than, electricity from the grid in many of China’s cities, a new study finds. This research may encourage broader adoption of industrial and commercial solar power there.
China is now the world's largest producer of electricity. Most of this electricity comes from coal, which was used to generate more than 72 percent of China's electricity in 2015. Still, China is aggressively pursuing renewable energy, with the U.S. Energy Information Administration projecting China's solar capacity to grow by more than 7 percent per year from 2015 to 2040, and its wind capacity to grow at nearly 5 percent annually during that period.
Previous research suggested that solar energy could reach grid parity—that is, become as or less expensive than coal and more conventional sources of electricity—in most developed countries between 2013 and 2020. In contrast, prior work suggested it might take China decades before solar energy achieved grid parity.
And a different signal for the same thing.
Wind power prices now lower than the cost of natural gas
In the US, it's cheaper to build and operate wind farms than buy fossil fuels.
This week, the US Department of Energy released a report that looks back on the state of wind power in the US by running the numbers on 2018. The analysis shows that wind hardware prices are dropping, even as new turbine designs are increasing the typical power generated by each turbine. As a result, recent wind farms have gotten so cheap that you can build and operate them for less than the expected cost of buying fuel for an equivalent natural gas plant.
Wind is even cheaper at the moment because of a tax credit given to renewable energy generation. But that credit is in the process of fading out, leading to long-term uncertainty in a power market where demand is generally stable or dropping.
2018 saw about 7.6 gigawatts of new wind capacity added to the grid, accounting for just over 20% of the US's capacity additions. This puts it in third place behind natural gas and solar power. That's less impressive than it might sound, however, given that things like coal and nuclear are essentially at a standstill.
This is an important signal indicating that improvements in renewable energy will also come from innovations in electric motors - including many devices for the disabled.
Company studies of existing automotive platforms, including the Tesla and the Toyota Prius, show that the motors could increase driving range by more than 10 percent, or allow those cars to carry smaller battery packs while delivering equivalent range.
“You’ll see us in a micromobility application, with air-cooled motors, within one year,”
New Electric Motor Could Boost Efficiency of EVs, Scooters, and Wind Turbines
The Hunstable Electric Turbine by Linear Labs can generate two to five times the torque of existing motors in the same-size package
Makers of electric vehicles, e-bikes, or electric scooters—and the owners who love them—tend to focus on batteries, and how much better their vehicles become as batteries shrink in weight, size, and cost.
But electric motors are the often-overlooked aspect of that equation. Linear Labs says its electric machines could revolutionize automobiles, wind turbines, and air conditioners as well as robotics, drones, and micromobility vehicles.
The Ft. Worth, Texas–based company has invented what it calls the Hunstable Electric Turbine, or HET. The patented HET, the company claims, can generate two to five times the torque of existing motors or generators, in the same-size package. Torque is the amount of work that a motor or engine produces, typically measured on a per-revolution basis.
The company’s permanent-magnet tech requires no rare-earth metals. The motors generate such robust torque that, in most applications, no gearbox reduction is necessary. The system incorporates a purely electronic transmission, which reduces energy losses and, at production scale, could trim at least 45 kilograms (100 pounds) from vehicle weight. Complexity and costs for engineering and manufacturing drop along with it.
This is another signal indicating the power of automation and AI to transform not only work - but science methods. Enhancing human attention by creating cognitive surplus to focus on more valuable efforts.
The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced
Robotic Platform Powered by AI Automates Molecule Production
New system could free bench chemists from time-consuming tasks, may help inspire new molecules.
Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.
The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.
Another fascinating signal of advances in our domestication of DNA into domains of potential manufacturing.
“We’re in the CRISPR age right now,” Collins says. “It’s taken over biology and biotechnology. We’ve shown that it can make inroads into materials and bio-materials.”
CRISPR cuts turn gels into biological watchdogs
Wunderkind gene-editing tool used to trigger smart materials that can deliver drugs and sense biological signals.
Is there anything CRISPR can’t do? Scientists have wielded the gene-editing tool to make scores of genetically modified organisms, as well as to track animal development, detect diseases and control pests. Now, they have found yet another application for it: using CRISPR to create smart materials that change their form on command.
The shape-shifting materials could be used to deliver drugs, and to create sentinels for almost any biological signal, researchers report in Science on 22 August. The study was led by James Collins, a bioengineer at the Massachusetts Institute of Technology in Cambridge.
Collins’ team worked with water-filled polymers that are held together by strands of DNA, known as DNA hydrogels. To alter the properties of these materials, Collins and his team turned to a form of CRISPR that uses a DNA-snipping enzyme called Cas12a. (The gene-editor CRISPR–Cas9 uses the Cas9 enzyme to snip a DNA sequence at the desired point.) The Cas12a enzyme can be programmed to recognize a specific DNA sequence. The enzyme cuts its target DNA strand, then severs single strands of DNA nearby.
This property allowed the researchers to build a series of CRISPR-controlled hydrogels containing a target DNA sequence and single strands of DNA, which break up after Cas12a recognizes the target sequence in a stimulus. The break-up of the single DNA strands triggers the hydrogels to change shape or, in some cases, completely dissolve, releasing a payload.
This is a weak signal of advances in long-distance quantum entanglement (though entanglement by itself cannot carry information faster than light) - and a signal of the development of our understanding and use of quantum-scale reality.
Entangling photons generated millions of miles apart
A team of researchers with members from China, Germany, the U.K. and the U.S. has found a way to entangle photons generated millions of miles apart. In their paper published in the journal Physical Review Letters, the researchers describe this feat and how it might be used to study properties of the sun.
To find out if this was possible, the researchers set up a filter system for light arriving from the sun that would only admit photons that matched all the characteristics of one they generated locally on demand. But that still left a problem with timing—getting them to arrive at a splitter at the same time. To make this happen, the researchers directed the stream of photons from the first filter through yet another filter—one that filtered out those photons that were not arriving at the same rate as would the photon they generated themselves.
This is a good signal of the arms race of data predators and personal privacy seekers.
Winston filter promises to give people control over their online privacy
A hardware filter called Winston that users plug into their modem to protect their data has launched, in response to mounting concern over digital privacy and surveillance.
Created by US start-up Winston Privacy, the filter promises to prevent online tracking and profiling for every member of the household where it is installed, and across every device on the network.
This includes smartphones and smart-home products like connected fridges and speakers. It is effectively a virtual private network, anti-spyware, anti-malware, firewall and ad-blocker in one.
Users plug it in between their modem and router, and one minute after they turn it on, the device should be effective.
It works by routing the household's web traffic through 20 to 30 other Winston units that are randomly selected several times each hour. This makes it impossible to correlate individual users to their IP addresses. As well as the hardware filter, there is a software component to Winston that users access through an annual subscription.
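The relay-rotation idea described above can be sketched in a few lines. This is purely illustrative - the function, parameters, and unit names are hypothetical, not Winston's actual protocol:

```python
import random

def pick_relay_peers(available_units, k_min=20, k_max=30, seed=None):
    """Illustrative only: choose a fresh random subset of 20-30 peer
    units to mix a household's traffic through, as Winston is
    described as doing several times each hour."""
    rng = random.Random(seed)
    k = rng.randint(k_min, min(k_max, len(available_units)))
    return rng.sample(available_units, k)

units = [f"unit-{i}" for i in range(100)]
# Each re-selection means outside observers see a shifting mix of
# IP addresses rather than one stable household address.
rotation = [pick_relay_peers(units) for _ in range(3)]
print([len(batch) for batch in rotation])
```

The privacy benefit comes from the churn: because the relay set changes many times an hour, traffic seen leaving any one unit cannot be stably tied back to a single household.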