Thursday, October 4, 2018

Friday Thinking 5 Oct 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9




Arguably, the 2008 financial crisis was different, because, as the theorist Geoffrey West writes, it was “stimulated by misconceived dynamics in the parochial and relatively localized US mortgage industry,” and thus exposed “the challenges of understanding complex adaptive systems.”

It is important to acknowledge such challenges. But an even more important point to note is that the profiles of the world’s largest companies today are very different from those of a decade ago. In 2008, PetroChina, ExxonMobil, General Electric, China Mobile, and the Industrial and Commercial Bank of China were among the firms with the highest market capitalization. In 2018, that status belongs to the so-called FAANG cluster: Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company).

Against this backdrop, it is no surprise that the US Federal Reserve’s annual symposium in Jackson Hole, Wyoming, last month focused on the dominance of digital platforms and the emergence of “winner-take-all” markets, not global debt. This newfound awareness reflects the fact that it is intangible assets like digital software, not physical manufactured goods, that are driving the new phase of global growth.

Bill Gates, the founder of Microsoft, recently explained this profound shift in a widely shared blog post. “The portion of the world’s economy that doesn’t fit the old model just keeps getting larger,” he writes. And this development “has major implications for everything from tax law to economic policy to which cities thrive and which cities fall behind.” The problem is that, “in general, the rules that govern the economy haven’t kept up. This is one of the biggest trends in the global economy that isn’t getting enough attention.”

Looking Beyond Lehman




The existence of simplicity in the description of underlying fundamental laws is not the only way that simplicity arises in science. The existence of multiple levels implies that simplicity can also be an emergent property. This means that the collective behavior of many elementary parts can behave simply on a much larger scale.

The study of complex systems focuses on understanding the relationship between simplicity and complexity. This requires both an understanding of the emergence of complex behavior from simple elements and laws, as well as the emergence of simplicity from simple or complex elements that allow a simple larger scale description to exist.

Whenever we are describing a simple macroscopic behavior, it is natural that the number of microscopic parameters relevant to model this behavior must be small. This follows directly from the simplicity of the macroscopic behavior. On the other hand, if we describe a complex macroscopic behavior, the number of microscopic parameters that are relevant must be large.

Yaneer Bar-Yam - Emergence of simplicity and complexity
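
Bar-Yam’s point - that the collective behavior of many parts can be simple at a much larger scale - has a one-screen illustration. A minimal sketch (mine, not from the paper): average many independently random elements and the macroscopic mean becomes simple and predictable, its fluctuations shrinking like 1/sqrt(N).

```python
import random
import statistics

def macro_spread(n_parts, trials=200):
    """Std. dev. of the average of n_parts random +1/-1 elements."""
    samples = [sum(random.choice((-1, 1)) for _ in range(n_parts)) / n_parts
               for _ in range(trials)]
    return statistics.pstdev(samples)

for n in (10, 100, 1000, 10000):
    print(f"N={n:>6}  fluctuation of the mean ~ {macro_spread(n):.4f}")
# The spread falls roughly as 1/sqrt(N): a simple large-scale variable
# (the mean) emerges from many individually unpredictable parts.
```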




On Bad News and Good News:
The bad news is cyberwar. It’s looking extremely powerful. It doesn’t have any rules yet. And it will only get rules through some pretty wretched excesses and disasters. And it’s going to take the world pretty much understanding and acting as one—which has never happened before. But I’m hopeful. Kevin Kelly’s line is that it’s pretty obvious we’re going to have to have global governance. That is what it will take to develop the rules of cyberwarfare.

And here’s my hopeful version: Climate change is forcing humanity to act as one to solve a problem that we created. It’s not like the Cold War. Climate change is like a civilizational fever. And we’ve got to find various ways to understand the fever and cure it in aggregate.

All this suggests that this century will be one where a kind of planetary civilization wakes up and discovers itself, that we are as gods and we have to get good at it.

Stewart Brand and the Tools That Will Make the Whole Earth





This is a good signal of the exploration of the possibility space for the emergence of new institutions related to the digital environment. While an ‘Auditor General of Algorithms’ isn’t mentioned, this list makes such an institution more plausible.

12 Organizations Saving Humanity from the Dark Side of AI

Artificial Intelligence (AI) can help do many incredible things, from detecting cancer to driving our cars, but it also raises many questions and poses new challenges. Stephen Hawking, the renowned physicist, once told the BBC, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Ensuring that this transition to the age of AI remains beneficial for humanity is one of the greatest challenges of our time, and here are 12 organizations working on saving humanity from the dark side of AI.


This is another vital institution - one that was a gift to the world and is now threatened with an ‘enclosure movement’ by privateers. Let’s hope this new gift is more resistant to capture by rent-seekers.

Tim Berners-Lee tells us his radical new plan to upend the World Wide Web

With an ambitious decentralized platform, the father of the web hopes it’s game on for corporate tech giants like Facebook and Google.
This week, Berners-Lee will launch Inrupt, a startup that he has been building, in stealth mode, for the past nine months. Backed by Glasswing Ventures, its mission is to turbocharge a broader movement afoot, among developers around the world, to decentralize the web and take back power from the forces that have profited from centralizing it. In other words, it’s game on for Facebook, Google, Amazon. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over.

“We have to do it now,” he says, displaying an intensity and urgency that is uncharacteristic for this soft-spoken academic. “It’s a historical moment.” Ever since revelations emerged that Facebook had allowed people’s data to be misused by political operatives, Berners-Lee has felt an imperative to get this digital idyll into the real world. In a post published this weekend, Berners-Lee explains that he is taking a sabbatical from MIT to work full time on Inrupt. The company will be the first major commercial venture built off of Solid, a decentralized web platform he and others at MIT have spent years building.

A NETSCAPE FOR TODAY’S INTERNET
If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. And as with Netscape, Berners-Lee hopes Inrupt will be just the first of many companies to emerge from Solid.


Well, I’m not sure that the general concept of exponential rates of change and improvement is over - but we are definitely on the threshold of new paradigms, including computational ones.
“Revolutionary new hardware architectures and new software languages, tailored to dealing with specific kinds of computing problems, are just waiting to be developed,” he said. “There are Turing Awards waiting to be picked up if people would just work on these things.”

It’s Time for New Computer Architectures and Software Languages

Moore’s Law is over, ushering in a golden age for computer architecture, says RISC pioneer
David Patterson—University of California professor, Google engineer, and RISC pioneer—says there’s no better time than now to be a computer architect.

That’s because Moore’s Law really is over, he says: “We are now a factor of 15 behind where we should be if Moore’s Law were still operative. We are in the post–Moore’s Law era.”

This means, Patterson told engineers attending the 2018 @Scale Conference held in San Jose last week, that “we’re at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend’s new computer was so much faster.”

But last year, he said, “single program performance only grew 3 percent, so it’s doubling every 20 years. If you are just sitting there waiting for chips to get faster, you are going to have to wait a long time.”
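
Patterson’s numbers are easy to check. A quick calculation (mine, for illustration) showing how 3 percent annual growth stretches the doubling time to roughly two decades:

```python
import math

# 3% single-program performance growth per year: how long to double?
print(f"{math.log(2) / math.log(1.03):.1f} years")  # ~23.4 -> "every 20 years"

# The old regime, doubling every 18 months, in annual terms:
print(f"{2 ** (12 / 18) - 1:.0%} per year")         # ~59% per year
```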


Imagine an AI-monitored hospital - real-time-all-the-time monitoring of patients. This is a good signal of the possibilities.
ICUs are among the most expensive components of the health care system. About 55,000 patients are cared for in an ICU every day, with the typical daily cost ranging from US $3,000 to $10,000. The cumulative cost is more than $80 billion per year.
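
A quick back-of-envelope check of those figures (mine, for illustration):

```python
patients_per_day = 55_000
low_cost, high_cost = 3_000, 10_000   # US$ per patient per day

annual_low = patients_per_day * low_cost * 365
annual_high = patients_per_day * high_cost * 365
print(f"${annual_low / 1e9:.0f}B to ${annual_high / 1e9:.0f}B per year")
# ~$60B to ~$201B, consistent with "more than $80 billion per year"
```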

AI Could Provide Moment-by-Moment Nursing for a Hospital’s Sickest Patients

In the intensive care unit, artificial intelligence can keep watch at a patient’s bedside
In a hospital’s intensive care unit (ICU), the sickest patients receive round-the-clock care as they lie in beds with their bodies connected to a bevy of surrounding machines. This advanced medical equipment is designed to keep an ailing person alive. Intravenous fluids drip into the bloodstream, while mechanical ventilators push air into the lungs. Sensors attached to the body track heart rate, blood pressure, and other vital signs, while bedside monitors graph the data in undulating lines. When the machines record measurements that are outside of normal parameters, beeps and alarms ring out to alert the medical staff to potential problems.

While this scene is laden with high tech, the technology isn’t being used to best advantage. Each machine is monitoring a discrete part of the body, but the machines aren’t working in concert. The rich streams of data aren’t being captured or analyzed. And it’s impossible for the ICU team—critical-care physicians, nurses, respiratory therapists, pharmacists, and other specialists—to keep watch at every patient’s bedside.

The ICU of the future will make far better use of its machines and the continuous streams of data they generate. Monitors won’t work in isolation, but instead will pool their information to present a comprehensive picture of the patient’s health to doctors. And that information will also flow to artificial intelligence (AI) systems, which will autonomously adjust equipment settings to keep the patient in optimal condition.


This is a very important signal regarding the enhancement of human capability.

The first “social network” of brains lets three people transmit thoughts to each other’s heads

BrainNet allows collaborative problem-solving using direct brain-to-brain communication.
The ability to send thoughts directly to another person’s brain is the stuff of science fiction. At least, it used to be.

In recent years, physicists and neuroscientists have developed an armory of tools that can sense certain kinds of thoughts and transmit information about them into other brains. That has made brain-to-brain communication a reality.

These tools include electroencephalograms (EEGs) that record electrical activity in the brain and transcranial magnetic stimulation (TMS), which can transmit information into the brain.

In 2015, Andrea Stocco and his colleagues at the University of Washington in Seattle used this gear to connect two people via a brain-to-brain interface. The people then played a 20 questions–type game.

An obvious next step is to allow several people to join such a conversation, and today Stocco and his colleagues announced they have achieved this using a world-first brain-to-brain network. The network, which they call BrainNet, allows a small group to play a collaborative Tetris-like game. “Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem-solving by humans using a ‘social network’ of connected brains,” they say.
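
The published BrainNet system decodes choices from steady-state visual responses: staring at a light flickering at a particular rate raises EEG power at the matching frequency. Here is a hedged sketch of that style of decoding; the sample rate, candidate frequencies, and noise level are illustrative assumptions, not the paper’s parameters.

```python
import numpy as np

FS = 250            # sample rate in Hz (assumed)
CHOICES = (15, 17)  # candidate flicker frequencies in Hz (assumed)

def simulate_eeg(freq, seconds=4):
    """Noisy EEG trace with a weak oscillation at the attended frequency."""
    t = np.arange(0, seconds, 1 / FS)
    return 0.5 * np.sin(2 * np.pi * freq * t) + np.random.randn(t.size)

def decode(eeg):
    """Pick the candidate frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, 1 / FS)
    power = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in CHOICES}
    return max(power, key=power.get)

print("decoded choice:", decode(simulate_eeg(17)), "Hz")  # almost always 17
```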


This is an important report on the current state of the ‘Platform Economy’ from JPMorgan Chase.

The Online Platform Economy in 2018 - Drivers, Workers, Sellers, and Lessors

Technological innovation is transforming economic exchange. Just a decade ago, the Online Platform Economy comprised a handful of marketplaces connecting independent sellers to buyers of physical goods. Today, many consumers use software platforms to procure almost any kind of good or service from independent suppliers as a routine part of daily life. Have these innovations created viable new options for making a living?

The Online Platform Economy is growing. As it grows, its sectors are diverging in important ways, raising the question as to whether they require tailored policy approaches. While freelance driving has been the engine of growth for the Online Platform Economy, it is not a full time job for most participants. In fact, alongside the rapid growth in the number of drivers has come a steady decline in average monthly earnings. Non-transportation work platforms continue to innovate on the types of contracts between independent suppliers and their customers. In selling and leasing sectors, high platform earnings are concentrated among a few participants. More broadly, we do not find evidence that the Online Platform Economy is replacing traditional sources of income for most families. Taken together, our findings indicate that regardless of whether or not platform work could in principle represent the “future of work,” most participants are not putting it to the type of use that would usher in that future.


Speaking of new forms of architecture, this is a good signal of the emerging use of distributed ledger systems for tracking things with transparent provenance.
"Our customers deserve a more transparent supply chain," said Frank Yiannas, vice president of food safety for Walmart, in a statement Monday. "This is a smart, technology-supported move that will greatly benefit our customers and transform the food system."

Walmart picks blockchain to track food safety with veggie suppliers

The company will be able to track exactly where that head of romaine lettuce came from.
Want to sell your vegetables through Walmart's formidably large network of stores? Better get on the blockchain.

Blockchain, a technology for creating a single ledger of transactions shared among the many parties involved, is potentially a big deal. That's certainly how Walmart sees it, requiring all suppliers of leafy green vegetables for Walmart's 5,358 stores to track their food with blockchain within one year to improve food safety.

Blockchain is in a bit of a rough patch right now, as some skeptics roll their eyes at attempts to cure all the world's ills with the technology. But that doesn't mean it's complete bunk. Walmart's endorsement shows a major corporate power believes that tracking food shipments can be a lot better with blockchain's distributed ledger.
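
For the curious, here is a toy sketch (mine - not Walmart’s actual system, which runs on a permissioned blockchain platform) of the core idea: each shipment record carries the hash of the previous record, so any retroactive edit breaks the chain and is detectable.

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record whose hash covers the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

chain = []
add_block(chain, {"lot": "romaine-001", "event": "harvested", "by": "farm A"})
add_block(chain, {"lot": "romaine-001", "event": "shipped", "by": "distributor B"})
add_block(chain, {"lot": "romaine-001", "event": "received", "by": "store C"})
print(chain[-1]["hash"][:16], "...")  # tampering upstream would change this
```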


This is a very important signal - perhaps suggesting that the new ‘hardware architectures’ should also include software and human networked-participatory architectures - emulating the shift from single-celled organisms to large multi-celled organisms. The extended mind concept creates more powerful collective intelligence and group consciousness.
“It went really well,” says Matthew Lungren, a pediatric radiologist at Stanford University Medical School, coauthor on the paper and one of the eight participants. “Before, we had to show [an X-ray] to multiple people separately and then figure out statistical ways to bring their answers to one consensus. This is a much more efficient and, frankly, more evidence-based way to do that.”
“We shouldn’t throw away human knowledge, wisdom, and experience,” says Louis Rosenberg, CEO and founder of Unanimous AI. “Instead, let’s look at how we can use AI to leverage those things.”

AI-Human “Hive Mind” Diagnoses Pneumonia

A small group of doctors moderated by AI algorithms made a more accurate diagnosis than individual physicians or AI alone
First, it correctly predicted the top four finishers at the Kentucky Derby. Then, it was better at picking Academy Award winners than professional movie critics—three years in a row. The cherry on top was when it prophesied that the Chicago Cubs would end a 108-year dry spell by winning the 2016 World Series—four months before the Cubs were even in the playoffs. (They did.)

Now, this AI-powered predictive technology is turning its attention to an area where it could do some real good—diagnosing medical conditions.

In a study presented on Monday at the SIIM Conference on Machine Intelligence in Medical Imaging in San Francisco, Stanford University doctors showed that eight radiologists interacting through Unanimous AI’s “swarm intelligence” technology were better at diagnosing pneumonia from chest X-rays than individual doctors or a machine-learning program alone.
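
Unanimous AI’s swarm algorithm is proprietary and interactive, so as a simple stand-in, here is what confidence-weighted pooling of several readers’ probability estimates looks like; all numbers are invented for illustration.

```python
# Eight readers' probability estimates for "pneumonia present",
# and how confident each reader is. All values are made up.
probs      = [0.80, 0.65, 0.90, 0.40, 0.70, 0.85, 0.60, 0.75]
confidence = [0.9,  0.5,  0.8,  0.3,  0.6,  0.9,  0.4,  0.7]

pooled = sum(p * w for p, w in zip(probs, confidence)) / sum(confidence)
print(f"pooled estimate: {pooled:.2f}")  # ~0.75, vs individual spread 0.40-0.90
# The study's claim is that interactive swarm convergence beats both this
# kind of static averaging and any single reader.
```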


This is a great must-read signal supporting the need for an AI-Human ‘hive mind’ applied to science and the publication of science research. There is too much to know, and there is even more to learn from our failures - especially in the complex and human sciences.

‘Journalologists’ use scientific methods to study academic publishing. Is their work improving science?

The Chicago, Illinois, meeting marked the birth of what is now sometimes called journalology, a term coined by Stephen Lock, a former editor of The British Medical Journal (The BMJ). Its goal: improving the quality of at least a slice of the scientific record, in part by creating an evidence-based protocol for the path from the design of a study to its publication. That medical journals took a leading role isn't surprising. A sloppy paper on quantum dots has never killed anyone, but a clinical trial on a new cancer drug can mean the difference between life and death.

The field has grown steadily and has spurred important changes in publication practices. Today, for example, authors register a clinical trial in advance if they want it considered for publication in a major medical journal, so it doesn't vanish if the results aren't as hoped. And authors and journal editors often pledge to include in their papers details important for assessing and replicating a study. But almost 30 years on, plenty of questions remain, says clinical epidemiologist David Moher of The Ottawa Hospital Research Institute, a self-described journalologist. Moher—who once thought his dyslexia explained why he couldn't understand so much published research—wants to know whether the reporting standards that journals now embrace actually make papers better, for instance, and whether training for peer reviewers and editors is effective.

Finding the answers isn't easy. Journalology still hovers on the edge of respectable science, in part because it's often competing with medicine for dollars and attention. Journals are also tough to study and sometimes secretive, and old habits die hard. "It's hard," Moher says, "to be a disruptor in this area."


This is about more than Uber - it’s about how we collectively harness ‘platforms’ of costless coordination to be the public infrastructure of the 21st Century ... OK, the first half of the 21st Century. This is filed under: imagine if Facebook had become a foundation similar to Wikimedia.

What if Uber Was Owned and Governed by Its Drivers?

The rise of platform cooperatives
We have an epic choice before us between platform coops and Death Star platforms, and the time to decide is now. It might be the most important economic decision we ever make, but most of us don’t even know we have a choice.

And just what is a Death Star platform? Bill Johnson of StructureC3 referred to Uber and Airbnb as Death Star platforms in a recent chat. The label struck me as surprisingly apt: it reflects the raw ambition and focused power of these platforms, particularly Uber.

Uber’s big bet is global monopoly or bust. They’ve raised over $8 billion in venture capital, are on track to do over $10 billion in revenue this year, and have an estimated 200,000 drivers who are destroying the taxi industry in over 300 cities worldwide. They’ve done all this in just over five years. In fact, they reached a $51 billion valuation faster than Facebook, and plan to raise even more money. If they’re successful, they’ll become the most valuable startup in history. Airbnb is nearly as big and ambitious.

Platform coops are the alternative to Death Stars. As Lisa Gansky urged, these platforms share value with the people who make them valuable. Platform coops combine a cooperative business structure with an online platform to deliver a real-world service. What if Uber was owned and governed by its drivers? What if Airbnb was owned and governed by its hosts? That’s what an emerging movement is exploring for the entire sharing economy in an upcoming conference, Platform Cooperativism.


The self-driving car is still on the near-term horizon - but self-driving transportation is emerging now.

Germany launches world's first autonomous tram in Potsdam

The Guardian goes for a ride on the new AI-driven Combino vehicle developed by Siemens
The world’s first autonomous tram was launched in unspectacular style in the city of Potsdam, west of Berlin, on Friday. The Guardian was the first English-language newspaper to be offered a ride on the vehicle developed by a team of 50 computer scientists, engineers, mathematicians, and physicists at the German engineering company Siemens.

Fitted with multiple radar, lidar (light from a laser), and camera sensors, forming digital eyes that film the tram and its surroundings during every journey, the tram reacts to trackside signals and can respond to hazards faster than a human.

Its makers say it is some way from being commercially viable but they do expect it to contribute to the wider field of driverless technology, and have called it an important milestone on the way to autonomous driving.


The key signal about the spread of AI in our digital environment is the speed at which learning spreads. When one person learns something important, only that person has learned it - getting others to know what has been learned is the subject of pedagogy. But when one AI learns something, all instances of that AI also learn it. This latest signal is also an omen of technological unemployment for many, many people. The inevitable surprises emerge when learnings can be combined.

MIT’s New Robot Taught Itself to Pick Things Up the Way People Do

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms sat for hours quietly grasping objects of various shapes and sizes. For hours on end, they taught themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
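
The core trick behind dense object descriptors is that every pixel is mapped to a vector, and a grasp point chosen on one image transfers to a new image by nearest-neighbor descriptor matching. A hedged sketch with random stand-in descriptor maps (a trained network would produce the real ones):

```python
import numpy as np

H, W, D = 60, 80, 16
desc_a = np.random.randn(H, W, D)   # descriptor map, reference image
desc_b = np.random.randn(H, W, D)   # descriptor map, new image

grasp_a = (30, 40)                  # pixel picked on the reference image
target = desc_a[grasp_a]

# Find the pixel in the new image whose descriptor is closest to the target.
dists = np.linalg.norm(desc_b - target, axis=-1)
grasp_b = np.unravel_index(np.argmin(dists), dists.shape)
print("transferred grasp point:", grasp_b)
```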


The cyborg is here - only they are unevenly distributed. This is amazing news for some.
For Thomas, when she walked without help for the first time, “it was like watching fireworks, but from the inside,” she says. “Something I was never supposed to do ever just happened. It was awesome. There’s no other feeling like it in the world.” The device that Thomas calls “Junior” is a 16-electrode array that delivers electrical stimulation to her spinal cord. With intense training, and what Harkema calls “a whisper of an intent” from Thomas’ brain, the device has helped Thomas walk again.

TWO PEOPLE WITH PARALYSIS WALK AGAIN USING AN IMPLANTED DEVICE

‘It was like watching fireworks, but from the inside’
After Kelly Thomas’ truck flipped with her inside of it in 2014, she was told that she probably would never walk again. Now, with help from a spinal cord implant that she’s nicknamed “Junior,” Thomas is able to walk on her own.

Thomas and Jeff Marquis, who was paralyzed after a mountain biking accident, can now independently walk again after participating in a study at the University of Louisville that was published today in the New England Journal of Medicine. Thomas’ balance is still off and she needs a walker, but she can walk a hundred yards across grass. She also gained muscle and lost the nerve pain in her foot that has persisted since her accident. Another unnamed person with a spinal cord injury can now independently step across the ground with help from a trainer, according to a similar study at the Mayo Clinic that was also published today in the journal Nature Medicine.


Another signal of the emerging cyborg human 2.0 - but also ultimately the emergence of a ‘Social Credit’ world - retrieving (in a McLuhanesque sense) a neo-tribal world.

Why You’re Probably Getting a Microchip Implant Someday

Microchip implants are going from tech-geek novelty to genuine health tool—and you might be running out of good reasons to say no.
When Patrick McMullan first heard in early 2017 that thousands of Swedish citizens were unlocking their car doors and turning on coffee machines with a wave of their palm, he wasn’t too impressed. Sure, the technology—a millimeters-long microchip equipped with near-field communication capabilities and lodged just under the skin—had a niche, cutting-edge appeal, but in practical terms, a fob or passcode would work just as well.

McMullan, a 20-year veteran of the tech industry, wanted to do one better—to find a use for implantable microchips that was genuinely functional, not just abstractly nifty. In July 2017, news cameras watched as more than 50 employees at Three Square Market, the vending-solutions company where McMullan is president, voluntarily received chip implants of their own. Rather than a simple scan-to-function process like most of Sweden’s chips use, the chips and readers around Three Square Market’s River Falls, Wisconsin, office were all part of a multistage feedback network. For example: Your chip could grant you access to your computer—but only if it had already unlocked the front door for you that day. “Now,” McMullan says of last summer, “I’ve actually done something that enhances our network security.”

The problem McMullan’s chips cleverly solve is relatively small-scale—but it’s still a problem, and any potential new-use case represents a significant step forward for a chip evangelist like him. As with most technologies, the tipping point for implantable chips will come when they become so useful they’re hard to refuse. It could happen sooner than you think: In September 2017, Three Square Market launched an offshoot, Three Square Chip, that is developing the next generation of commercial microchip implants, with a slew of originative health features that could serve as the best argument yet that microchips’ benefits can outweigh our anxieties about them.
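
The multistage rule described above is easy to picture in code. A minimal sketch (mine; the event-log shape and IDs are invented): the chip unlocks a workstation only if the same chip already opened the front door that day.

```python
from datetime import date

# Invented event log: each entry records a chip scan somewhere on site.
events = [
    {"chip": "emp-042", "action": "door_unlock", "day": date(2018, 10, 4)},
]

def may_unlock_computer(chip_id, today, log):
    """Allow computer login only after a door unlock by the same chip today."""
    return any(e["chip"] == chip_id and e["action"] == "door_unlock"
               and e["day"] == today for e in log)

print(may_unlock_computer("emp-042", date(2018, 10, 4), events))  # True
print(may_unlock_computer("emp-043", date(2018, 10, 4), events))  # False
```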


This is a great signal of emerging human enhancement - even though it’s not ready for primetime, this sort of technology is inevitable. Great for all sorts of activities.

Zooming contact lenses enlarge at a wink

A contact lens with a built-in zoom that the wearer can switch at will between regular and telescopic vision could mean the end to dwindling independence for those with deteriorating eyesight, researchers suggested today. The rigid lens covers the majority of the front surface of the eye, including both the whites and the pupil, and contains an array of tiny aluminum mirrors that together can enlarge the world by 2.8x. Winking flips the view between regular and magnified.


Very interesting signal of the emerging capacity of domesticating DNA to enable a metabolic process of plastic management.
"Although the improvement is modest, this unanticipated discovery suggests that there is room to further improve these enzymes, moving us closer to a recycling solution for the ever-growing mountain of discarded plastics.

Researchers accidentally create mutant enzyme that eats plastic

An engineered enzyme that eats plastic could usher in a recycling revolution, scientists hope.
British researchers created the plastic-digesting protein accidentally while investigating its natural counterpart.

Tests showed that the lab-made mutant had a supercharged ability to break down polyethylene terephthalate (PET), one of the most popular forms of plastic employed by the food and drinks industry.

The new research sprang from the discovery of bacteria in a Japanese waste recycling centre that had evolved the ability to feed on plastic.
The bugs used a natural enzyme called PETase to digest PET bottles and containers.
While probing the molecular structure of PETase, the UK team inadvertently created a powerful new version of the protein.


Another good signal of progress on the development of new antibiotics.

There’s A New Antibiotic In Town, And We Can Create It In The Lab

A PROMISING DISCOVERY. Bacteria are shifty little things. We isolate antibiotics to kill them, and they evolve to dodge the attack. This is called antibiotic resistance, and it’s gotten so bad the United Nations (UN) actually declared it a crisis back in September 2016.

Now, a team of Chinese researchers think they’ve found a new weapon against antibiotic resistance: a fungal compound called albomycin δ2 that we can actually recreate in the lab.

They published their study Tuesday in the journal Nature Communications.
We knew from previous research that albomycins had antimicrobial properties, but it wasn’t until this research team dug into the fungal compounds that they learned that one specific compound, albomycin δ2, was especially adept at killing bacteria. It even outperformed a number of established antibiotics, including penicillin, when tested against the notoriously difficult to treat methicillin-resistant Staphylococcus aureus (MRSA).


It’s amazing what is emerging in the domains of biology and how we are constituted as WEs - multiple selves-as-a-self. It seems that we have at least three brains - the one in our skull, the one in our stomach, and the one in our intestines - and all of them integrated.

Your gut is directly connected to your brain, by a newly discovered neuron circuit

The human gut is lined with more than 100 million nerve cells—it’s practically a brain unto itself. And indeed, the gut actually talks to the brain, releasing hormones into the bloodstream that, over the course of about 10 minutes, tell us how hungry it is, or that we shouldn’t have eaten an entire pizza. But a new study reveals the gut has a much more direct connection to the brain through a neural circuit that allows it to transmit signals in mere seconds. The findings could lead to new treatments for obesity, eating disorders, and even depression and autism—all of which have been linked to a malfunctioning gut.

The study reveals “a new set of pathways that use gut cells to rapidly communicate with … the brain stem,” says Daniel Drucker, a clinician-scientist who studies gut disorders at the Lunenfeld-Tanenbaum Research Institute in Toronto, Canada, who was not involved with the work. Although many questions remain before the clinical implications become clear, he says, “This is a cool new piece of the puzzle.”


This may be useful to anyone who wants to know more about searching on Google, on its 20th birthday.

20 things you didn't know you could do with Search

Search has been helping you settle bets since 1998 (turns out there is some caffeine in decaf coffee), but now that we’re 20 years in, we’ve built dozens of other useful features and tools to help you get through your day. Let’s jump into some of these secret (and not-so-secret) tricks to up your Search game.

Search as your everyday sidekick
Here are some of the ways you can plan your day and stay in the know with Search.


A signal from the author of ‘Smart Mobs’ - a key Internet social media pioneer.

On quitting Facebook

I was one of the very first of the 2 billion Facebook users. I was teaching about social media at Stanford when Facebook was open only to Harvard, Princeton, and Stanford, so it only made sense for me to join right away. Like never needing paper maps and so many other changes that information technologies have brought to our lives, the changes on campus that Facebook brought seemed radical at the time, but are now just part of the landscape. Four years later — again, this was news back then, but just part of the landscape now — I was there when students were shocked that their drunken Facebook postings and photos affected their graduate school and employment prospects.

MySpace was a mess, but it was a creative mess, with users learning and borrowing from each other in a frenzy of customization.
My heart sank when Mark Z started talking about making Facebook “the social operating system of the web.” It was immediately obvious that a single, proprietary system would be a huge social and technical regression from the myriad ways people had been communicating on the social web.

As I have noted here before, I wrote in the mid-1990s about coming massive dataveillance — and very, very few people seemed to care. I knew that Amazon was using data collected about my purchases and interests to suggest books to me, and although I recognized it as creeping dataveillance, I was one of many who not only didn’t care — I welcomed the convenience. When Facebook ads started trying to sell me things that I had looked at via other websites, that was creepy. But I did not comprehend what has come to be known as “surveillance capitalism” and the degree to which extremely detailed dossiers on the behavior of billions of users could be used to microtarget advertising (including, it turns out, to use gender and racial criteria to filter job and housing advertisements), and I don’t think anybody understood how that microtargeting could be joined with armies of bots and human trolls to influence public opinion, sow conflict, and (probably) swing elections.

It’s become clear that Facebook didn’t understand their inability to control such detailed surveillance of so many people, and to keep such a huge, widespread, and complex mechanism secure. It will never be secure. There will be breach after breach. They will never be able to fully automate and/or hire enough humans to protect the public sphere from computational propaganda that uses Facebook as a channel. (And in regard to the public sphere — does anybody really think anyone’s political opinions are changed by the sometimes vile and always time-wasting political arguments on Facebook?) And most recently, I learned that Facebook had sold my phone number to spammers after I had provided it (privately, I thought) to use to retrieve my account in case it was hacked.

I appreciate what Facebook enables for me. I have been able to stay in contact with people’s lives — many of whom would fall off my radar otherwise. I can share what I know, discover, and make. I can have fun. FB is a great source of gifs, videos, links, rants. I’ll be sorry to lose all that.


Well, the irony.

Linux now dominates Azure

The most popular operating system on Microsoft's Azure cloud today is -- drumroll please -- Linux.
Three years ago, Mark Russinovich, CTO of Azure, Microsoft's cloud program, said, "One in four [Azure] instances are Linux." Then, in 2017, 40 percent of Azure virtual machines (VMs) were Linux. Today, Scott Guthrie, Microsoft's executive vice president of the cloud and enterprise group, said in an interview, "it's about half now, but it varies on the day because a lot of these workloads are elastic, but sometimes slightly over half of Azure VMs are Linux." Microsoft later clarified, "about half Azure VMs are Linux."

That's right. On Microsoft's prized cloud, Linux, not Windows Server, is now, at least some of the time, the most popular operating system. And Windows Server isn't going to be making a comeback.
You see, as Guthrie added, "Every month, Linux goes up."

Thursday, September 27, 2018

Friday Thinking 28 Sept 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9




How can we prepare ourselves and our children for a world of such unprecedented transformations and radical uncertainties? A baby born today will be thirty-something in 2050. If all goes well, that baby will still be around in 2100, and might even be an active citizen of the 22nd century. What should we teach that baby that will help him or her survive and flourish in the world of 2050 or of the 22nd century? What kind of skills will he or she need in order to get a job, understand what is happening around them and navigate the maze of life?

Unfortunately, since nobody knows how the world will look in 2050 – not to mention 2100 – we don’t know the answer to these questions. Of course, humans have never been able to predict the future with accuracy. But today it is more difficult than ever before, because once technology enables us to engineer bodies, brains and minds, we can no longer be certain about anything – including things that previously seemed fixed and eternal.

A thousand years ago, in 1018, there were many things people didn’t know about the future, but they were nevertheless convinced that the basic features of human society were not going to change. If you lived in China in 1018, you knew that by 1050 the Song Empire might collapse, the Khitans might invade from the north, and plagues might kill millions. However, it was clear to you that even in 1050 most people would still work as farmers and weavers, rulers would still rely on humans to staff their armies and bureaucracies, men would still dominate women, life expectancy would still be about 40, and the human body would be exactly the same. Hence in 1018, poor Chinese parents taught their children how to plant rice or weave silk, and wealthier parents taught their boys how to read the Confucian classics, write calligraphy or fight on horseback – and taught their girls to be modest and obedient housewives. It was obvious these skills would still be needed in 1050.

In contrast, today we have no idea how China or the rest of the world will look in 2050. We don’t know what people will do for a living, we don’t know how armies or bureaucracies will function, and we don’t know what gender relations will be like. Some people will probably live much longer than today, and the human body itself might undergo an unprecedented revolution thanks to bioengineering and direct brain-computer interfaces. Much of what kids learn today will likely be irrelevant by 2050.

Yuval Noah Harari on what the year 2050 has in store for humankind




‘I think, therefore I am,’ the 17th-century philosopher René Descartes proclaimed as a first truth. That truth was rediscovered in 1887 by Helen Keller, a deaf and blind girl, then seven years of age: ‘I did not know that I am. I lived in a world that was a no world … When I learned the meaning of “I” and “me” and found that I was something,’ she later explained, ‘I began to think. Then consciousness first existed for me.’ As both these pioneers knew, a fundamental part of conscious experience is ‘inner speech’ – the experience of verbal thought, expressed in one’s ‘inner voice’. Your inner voice is you.

That voice isn’t the sound of anything. It’s not even physical – we can’t observe it or measure it in any direct way. If it’s not physical, then we can arguably only attempt to study it by contemplation or introspection; students of the inner voice are ‘thinking about thinking’, an act that feels vague. William James, the 19th-century philosopher who is often touted as the originator of American psychology, compared the act to ‘trying to turn up the gas quickly enough to see how the darkness looks’.

Yet through new methods of experimentation in the last few decades, the nature of inner speech is finally being revealed. In one set of studies, scans are allowing researchers to study the brain regions linked with inner speech. In other studies, researchers are investigating links between internal and external speech – that which we say aloud.

The inner voice




Basically, when you get to my age, you'll really measure your success in life by how many of the people you want to have love you actually do love you.

I know many people who have a lot of money, and they get testimonial dinners and they get hospital wings named after them. But the truth is that nobody in the world loves them.

That's the ultimate test of how you have lived your life. The trouble with love is that you can't buy it. You can buy sex. You can buy testimonial dinners. But the only way to get love is to be lovable. It's very irritating if you have a lot of money. You'd like to think you could write a check: I'll buy a million dollars' worth of love. But it doesn't work that way. The more you give love away, the more you get.

So let me get this straight: The most important lesson and "the ultimate test" of a life well-lived has nothing to do with money and everything to do with the most powerful emotion a human being can feel: love.

Warren Buffett Says Your Greatest Measure of Success at the End of Your Life Comes Down to 1 Word





An interesting 15-min TED Talk on the possibilities of AI - unleashing the human spirit from routine work and enabling humans to focus on pursuing 'labors of love'.

How AI can save our humanity

AI is massively transforming our world, but there's one thing it cannot do: love. In a visionary talk, computer scientist Kai-Fu Lee details how the US and China are driving a deep learning revolution -- and shares a blueprint for how humans can thrive in the age of AI by harnessing compassion and creativity. "AI is serendipity," Lee says. "It is here to liberate us from routine jobs, and it is here to remind us what it is that makes us human."  
For more on the presenter, Dr. Kai-Fu Lee, here’s his website.
Also, here he is talking with Peter Diamandis:

Kai-Fu Lee + Future of AI

In this webinar, Peter speaks with Kai-Fu Lee - one of the world's most respected experts on AI. The discussion covers how Artificial Intelligence is reshaping the world as we know it. Topics include how AI will transform every major industry and impact our lives, whether technology is killing jobs, and robot-human co-existence.


This is a great signal of how humans augmented with AI are better than either humans or AI alone.
“It went really well,” says Matthew Lungren, a pediatric radiologist at Stanford University Medical School, coauthor on the paper and one of the eight participants. “Before, we had to show [an X-ray] to multiple people separately and then figure out statistical ways to bring their answers to one consensus. This is a much more efficient and, frankly, more evidence-based way to do that.”

AI-Human “Hive Mind” Diagnoses Pneumonia

A small group of doctors moderated by AI algorithms made a more accurate diagnosis than individual physicians or AI alone
First, it correctly predicted the top four finishers at the Kentucky Derby. Then, it was better at picking Academy Award winners than professional movie critics—three years in a row. The cherry on top was when it prophesied that the Chicago Cubs would end a 108-year dry spell by winning the 2016 World Series—four months before the Cubs were even in the playoffs. (They did.)

Now, this AI-powered predictive technology is turning its attention to an area where it could do some real good—diagnosing medical conditions.

In a study presented on Monday at the SIIM Conference on Machine Intelligence in Medical Imaging in San Francisco, Stanford University doctors showed that eight radiologists interacting through Unanimous AI’s “swarm intelligence” technology were better at diagnosing pneumonia from chest X-rays than individual doctors or a machine-learning program alone.

It was a small study, but the findings suggest that instead of replacing doctors, AI algorithms might work best alongside them in health care.


The advent of AI and robotics may eliminate many types of work - but it may also enable many people to work at jobs they weren’t previously able to do.
“I want to create a world in which people who can’t move their bodies can work too,” said Kentaro Yoshifuji, chief executive officer of Ory Lab. Inc., the developer of the robots.

Cafe utilizing robot waiters remotely controlled by disabled to open in Tokyo

A cafe will open in Tokyo’s Akasaka district in November featuring robot waiters remotely controlled from home by people with severe physical disabilities.
The cafe, which will be open on weekdays between Nov. 26 and Dec. 7, will deploy OriHime-D robots controlled by disabled people with conditions such as amyotrophic lateral sclerosis, a form of motor neuron disease.

The robot waiters, 1.2 meters tall and weighing 20 kg., will transmit video footage and audio via the internet, allowing their controllers to direct them from home via tablets or computers.

At an event marking the OriHime-D’s debut in August, a robot controlled by Nozomi Murata, who suffers from autophagic vacuolar myopathy, which causes muscle weakness, asked a family if they would like some chocolate.


This is really a must-view and must-read - some very interesting visuals of the system in action and a discussion of China’s Social Credit system from the point of view of two people. It seems to me that some form of social credit system is inevitable - the question is: will it support participatory democracy and an egalitarian society, or will it be a choice architecture for a totalitarian surveillance society?

Leave no dark corner

China is building a digital dictatorship to exert control over its 1.4 billion citizens. For some, “social credit” will bring privileges — for others, punishment.
What may sound like a dystopian vision of the future is already happening in China. And it’s making and breaking lives.

The Communist Party calls it “social credit” and says it will be fully operational by 2020. Within years, an official Party outline claims, it will “allow the trustworthy to roam freely under heaven while making it hard for the discredited to take a single step”.

Social credit is like a personal scorecard for each of China’s 1.4 billion citizens.
In one pilot program already in place, each citizen has been assigned a score out of 800. In other programs it’s 900.
Those, with top “citizen scores” get VIP treatment at hotels and airports, cheap loans and a fast track to the best universities and jobs.

Those at the bottom can be locked out of society and banned from travel, or barred from getting credit or government jobs.

The system will be enforced by the latest in high-tech surveillance systems as China pushes to become the world leader in artificial intelligence.
Surveillance cameras will be equipped with facial recognition, body scanning and geo-tracking to cast a constant gaze over every citizen.


A possible alternate view is a fragmentation of governance through emerging forms of criminal and other conflict.

The Coming Crime Wars

Future conflicts will mostly be waged by drug cartels, mafia groups, gangs, and terrorists. It is time to rethink our rules of engagement.
Wars are on the rebound. There are twice as many civil conflicts today, for example, as there were in 2001. And the number of nonstate armed groups participating in the bloodshed is multiplying. According to the International Committee of the Red Cross (ICRC), roughly half of today’s wars involve between three and nine opposing groups. Just over 20 percent involve more than 10 competing blocs. In a handful, including ongoing conflicts in Libya and Syria, hundreds of armed groups vie for control. For the most part, these warring factions are themselves highly fragmented, and today’s warriors are just as likely to be affiliated with drug cartels, mafia groups, criminal gangs, militias, and terrorist organizations as with armies or organized rebel factions.

This cocktail of criminality, extremism, and insurrection is sowing havoc in parts of Central and South America, sub-Saharan and North Africa, the Middle East, and Central Asia. Not surprisingly, these conflicts are defying conventional international responses, such as formal cease-fire negotiations, peace agreements, and peacekeeping operations. And diplomats, military planners, and relief workers are unsure how best to respond. The problem, it seems, is that while the insecurity generated by these new wars is real, there is still no common lexicon or legal framework for dealing with them. Situated at the intersection of organized crime and outright war, they raise tricky legal, operational, and ethical questions about how to intervene, who should be involved, and the requisite safeguards to protect civilians.


The Internet is in crisis - slowly becoming enclosed by corporate privateers - and all of us should be considering transforming the Internet into public infrastructure. That said, Google is working hard to bring the Internet to areas where for-profit enterprises and competition among incumbents haven’t delivered.
"Instead of one balloon utilizing one ground-based connection point to serve users, we can use that same terrestrial access point to activate a network of multiple balloons, all of which can connect people below," explained Loon's head of engineering, Salvatore Candido.

Alphabet's Loon balloons just beamed the internet across 1000km

Loon engineers can now boost internet coverage using a web of balloons connected to a single ground access point.
Loon, the former Google X project and now independent Alphabet company, has developed an antenna system that could create a far greater ground coverage than previously possible.

According to Loon each of its balloons, from 20km above earth, can cover an area of about 80km in diameter and serve about 1,000 users on the ground using an LTE connection. However, Loon balloons need a backhaul connection from an access point on the ground and without that connection the balloons can't provide connectivity to users on the ground.

But on Tuesday the company revealed it had sent data across a network of seven balloons from a single ground connection spanning a distance of 1,000 kilometers, or about 621 miles.

It also achieved its longest ever point-to-point link, sending data between two balloons over a distance of 600km.
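
The quoted figures invite some quick geometry (mine, for illustration; the even spacing of hops is an assumption):

```python
import math

# One balloon covers a circle roughly 80 km across.
coverage_km2 = math.pi * (80 / 2) ** 2
print(f"one balloon: ~{coverage_km2:,.0f} km^2")   # ~5,027 km^2

# Seven balloons relayed data ~1,000 km from one ground station.
# Assuming a simple chain, that is 6 balloon-to-balloon hops:
print(f"average hop: ~{1000 / 6:.0f} km")          # ~167 km per link,
# comfortably under the 600 km point-to-point record mentioned above.
```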


A good signal confirming McLuhan’s notion that the earth is now within a human-built environment - and as such is now a work of art - thus we must accept the challenge of becoming worthy artists. But we may also have to become improv artists in order to step up to our response-ability to the continual challenge of complex change.

NASA probe will track melting polar ice in unprecedented detail

The Ice, Cloud and land Elevation Satellite can measure changes in ice thickness to within half a centimetre.
NASA is set to launch its most advanced global ice-monitoring satellite, which has been in the works for nearly a decade.

The agency plans to send the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) into space on 15 September atop a Delta 2 rocket from the Vandenberg Air Force Base in California. It will focus on measuring changes in ice thickness in places including Greenland and Antarctica, but it will also collect data on forest growth and cloud height.

The satellite is designed to track seasonal and annual changes in ice thickness to within half a centimetre — a resolution greater than any previous elevation-monitoring satellite. The US$1-billion spacecraft will orbit 500 kilometres above Earth’s surface, and cover the globe every three months for the next three years.


Another signal about managing real-time information for complex systems, bringing food security (on many levels) to bear on the entire food web.

How Blockchain Technology Could Track and Trace Food From Farm to Fork

IEEE standards working group is using the distributed ledger to improve and secure the supply chain
During a foodborne-illness outbreak, identifying the root cause of the contamination can be difficult. The food supply chain includes a lot of players—farmers, distributors, processors, packagers, and grocers—often from different regions or even multiple countries. Each business in the chain has its own private record-keeping system; some still use pen and paper.

An E. coli outbreak in the United States that began in April—the largest in more than a decade—came from tainted romaine lettuce that sickened more than 200 people in 36 states. It took government investigators two months to track the lettuce back to a grower in Yuma, Ariz.

Almost one in every 10 people around the world get a foodborne disease each year. Of those 600 million people, 420,000 die as a result, according to the World Health Organization.

IEEE and several other organizations are exploring how blockchain technology could track the source of an outbreak and help contain it. Rather than each company storing information in its own system, the businesses would contribute encrypted blocks of data to a distributed ledger that could be monitored and verified. Through data provenance, or proof of ownership, blockchain provides the ability to isolate where in the supply chain a problem occurred. This could lead to a more targeted recall process and create less risk for consumers and other stakeholders.
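
To make the trace-back idea concrete, here is an illustrative query over a shared ledger (field names and parties invented); the point is that a single lookup replaces a two-month, multi-party investigation.

```python
# Invented shared ledger: one event per lot per supply-chain stage.
ledger = [
    {"lot": "L-17", "stage": "farm",        "party": "Grower (Yuma, AZ)"},
    {"lot": "L-17", "stage": "processor",   "party": "Packer Co."},
    {"lot": "L-17", "stage": "distributor", "party": "FreshShip Inc."},
    {"lot": "L-17", "stage": "retail",      "party": "Grocer #204"},
    {"lot": "L-18", "stage": "farm",        "party": "Grower (Salinas, CA)"},
]

def trace_back(lot, ledger):
    """All recorded handlers of a lot, in the order they were logged."""
    return [e["party"] for e in ledger if e["lot"] == lot]

print(trace_back("L-17", ledger))
# ['Grower (Yuma, AZ)', 'Packer Co.', 'FreshShip Inc.', 'Grocer #204']
```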


Bruce Sterling wrote ‘Shaping Things’ in which he coined the terms ‘Spime’ -  future-manufactured objects with informational support so extensive and rich that they are regarded as material instantiations of an immaterial system. Spimes are designed on screens, fabricated by digital means, and precisely tracked through space and time. They are made of substances that can be folded back into the production stream of future spimes, challenging all of us to become involved in their production. Spimes are coming, says Sterling. We will need these objects in order to live; we won't be able to surrender their advantages without awful consequences.
The concept of a spime - an object with a data tracker, like a near-field sensor or RFID chip - would enable people to track the life cycle of their goods, from factory to recycling. Perhaps we need to embrace such a concept so that we don’t fall prey to artificially created scarcities. Then again, maybe we will eventually have custom-made products.

Why fashion brands destroy billions’ worth of their own merchandise every year

An expert explains why Burberry, H&M, Nike, and Urban Outfitters destroy unsold merch — and what it says about consumer culture.
The British luxury brand Burberry brought in $3.6 billion in revenue last year — and destroyed $36.8 million worth of its own merchandise.

In July 2018, the brand admitted in its annual report that demolishing goods was just part of its strategy to preserve its reputation of exclusivity.
Shoppers did not react well to this news. People vowed to boycott Burberry over its wastefulness, while members of Parliament demanded the British government crack down on the practice. The outrage worked: Burberry announced two weeks ago it would no longer destroy its excess product, effective immediately.

Yet Burberry is hardly the only company to use this practice; it runs high to low, from Louis Vuitton to Nike. Brands destroy product as a way to maintain exclusivity through scarcity, but the precise details of who is doing it and why are not commonly publicized. Every now and then, though, bits of information will trickle out. Last year, for example, a Danish TV station revealed that the fast-fashion retailer H&M had burned 60 tons of new and unsold clothes since 2013.


A signal of how to use AI on the big data we’ve been accumulating for decades.

Artificial intelligence helps track down mysterious cosmic radio bursts

Breakthrough Listen researchers used artificial intelligence to search through radio signals recorded from a fast radio burst source, capturing many more than humans could. In fact, they used machine learning to discover 72 new fast radio bursts from a mysterious source some 3 billion light years from Earth.

“This work is exciting not just because it helps us understand the dynamic behavior of fast radio bursts in more detail, but also because of the promise it shows for using machine learning to detect signals missed by classical algorithms,” said Andrew Siemion, director of the Berkeley SETI Research Center and principal investigator for Breakthrough Listen, the initiative to find signs of intelligent life in the universe.
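
Breakthrough Listen trained a convolutional neural network for this; as a much simpler stand-in, here is threshold detection of bright bursts in a simulated, already-dedispersed intensity series. The faint bursts such a cut misses are exactly where a learned model earns its keep.

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0, 1, 100_000)          # simulated radiometer noise
burst_locs = [12_345, 40_000, 77_777]
series[burst_locs] += 12                    # inject three bright bursts

threshold = series.mean() + 8 * series.std()   # an 8-sigma cut (assumed)
detections = np.flatnonzero(series > threshold)
print("detected sample indices:", detections.tolist())
# Recovers all three injected bursts with no false positives at this
# signal-to-noise; weaker bursts would slip below a fixed cut.
```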


For the ‘anything that can be automated - will be’ file. The question that remains is not the speed of delivery but the quality of the coffee and other drinks.

Robotic Coffee Shop Cafe X Gets You Coffee in Under 30 Seconds

The robot invasion of the food and beverage industry is well underway, from bots making pizza and ice cream to android waiters delivering your food. The next target seems to be your daily coffee.

San Francisco’s Café X is a robotic coffee shop that hopes to revolutionize and speed up the coffee process by employing an army of robots. It’s not really an army, more of an assembly line that quickly puts together your coffee. We emphasize quickly, as they make your drink in under 30 seconds. No need to worry about getting the coffee exactly how you asked; the robotic precision gets your order right every time.


Robotics and AI are best seen as human enhancers (not including what will be possible in the transition enabled by domesticating DNA).
“As of today, signals from the brain can be used to command and control … not just one aircraft but three simultaneous types of aircraft,” said Justin Sanchez, who directs DARPA’s biological technology office, at the Agency’s 60th-anniversary event in Maryland.
“The signals from those aircraft can be delivered directly back to the brain so that the brain of that user [or pilot] can also perceive the environment,” said Sanchez. “It’s taken a number of years to try and figure this out.”

It’s Now Possible To Telepathically Communicate with a Drone Swarm

DARPA’s new research in brain-computer interfaces is allowing a pilot to control multiple simulated aircraft at once.
A person with a brain chip can now pilot a swarm of drones — or even advanced fighter jets, thanks to research funded by the U.S. military’s Defense Advanced Research Projects Agency, or DARPA.

The work builds on research from 2015, which allowed a paralyzed woman to steer a virtual F-35 Joint Strike Fighter with only a small, surgically-implantable microchip. On Thursday, agency officials announced that they had scaled up the technology to allow a user to steer multiple jets at once.

More importantly, DARPA was able to improve the interaction between pilot and the simulated jet to allow the operator, a paralyzed man named Nathan, to not just send but receive signals from the craft.  

In essence, it’s the difference between having a brain joystick and having a real telepathic conversation with multiple jets or drones about what’s going on, what threats might be flying over the horizon, and what to do about them. “We’ve scaled it to three [aircraft], and have full sensory [signals] coming back. So you can have those other planes out in the environment and then be detecting something and send that signal back into the brain,” said Sanchez.  


The advances in domesticating DNA are bringing regular new insights to all domains of biology - and even to the biocultural co-creation of beings, beyond the traditional nature-nurture polarity.
The very different individuals that arise from hive members’ identical genomes have made the honeybee “one of the most well-known and striking examples of phenotypic plasticity,” says Gene Robinson, a genomicist at the University of Illinois who was not involved in the study. Bees are a model, he says, for “understanding how environmental signals are transduced and then trigger these alternate developmental pathways.”

As Bees Specialize, So Does Their DNA Packaging

A study of chemical tags on histone proteins hints at how the same genome can yield very different animals.
The bee genome has a superpower. Not only can the exact same DNA sequence yield three types of insect—worker, drone, and queen—that look and behave very differently, but, in the case of workers, it dictates different sets of behaviors.

A key to the genome’s versatility seems to be epigenetic changes—chemical tags that, when added or removed from DNA, change the activity of a gene. Previous studies had shown distinct patterns of tags known as methyl groups on the genomes of bees performing different roles within their hives.

In a study published in Genome Research last month, Paul Hurd, an epigenetics researcher at Queen Mary University of London, and colleagues looked at a different type of epigenetic change: histone modifications. DNA wraps around histones, and chemical modifications to these proteins are thought to affect how available genes are to be transcribed.


And one more interesting signal about the rise of urban wildlife.

Urban bees are living healthier lives than rural bees

Bumblebees are making it in the city.
Research published in the Royal Society B found that bumblebees living in urban areas experience healthier lives than their counterparts in rural habitats. Their colonies are larger, better fed, and less prone to disease. Urban colonies also survive longer than their country cousins.

Researchers raised colonies of wild-caught bumblebees in agricultural and urban environments. Every week, the researchers would count the bee population in each hive three times.