Thursday, October 18, 2018

Friday Thinking 19 Oct 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:
Johann Wolfgang von Goethe - 1825 - Goethe’s Letters to Zelter.

Articles:
Rethink Robotics, Pioneer of Collaborative Robots, Shuts Down
A feeling for flow
How to Destroy Neoliberalism: Kill ‘Homo Economicus’
NASA is using HoloLens AR headsets to build its new spacecraft faster
Waymo’s cars drive 10 million miles a day in a perilous virtual world
Where the next 10 million miles will take us
Self-healing material can build itself from carbon in the air
Brad Porter, VP of Robotics at Amazon, on Warehouse Automation, Machine Learning, and His First Robot
Tencent Aims To Train AI To Spot Parkinson's In 3 Minutes
Google’s AI is better at spotting advanced breast cancer than pathologists
Forests Emerge as a Major Overlooked Climate Factor
The Produce of the Future Could Taste Better, Reduce Waste, and Look Very, Very Cool
World’s fastest camera freezes time at 10 trillion frames per second
Deep in Human DNA, a Gift From the Neanderthals
Human Retinas Grown in a Dish Explain How Color Vision Develops
Microsoft open-sources its patent portfolio
Jupyter, Mathematica, and the Future of the Research Paper
The Internet’s keepers? “Some call us hoarders—I like to say we’re archivists”



“I am most proud that we changed industrial robots forever, bringing them out of the cage and making them so that ordinary people could get robots to do new tasks and to tweak what they were doing without writing or reading a single line of code,” he told us.

“We also made it possible for hundreds of research groups around the world to have safe robot arms so that they could make rapid research progress using manipulation,” he added. “And we showed how real robot arms, with 35,000-hour lifetimes, could also be gentle enough to physically come into contact with humans—the consequences of this new class of robot are yet to be fully explored but it will be commonplace in just a few years.”

The collaborative robots market that Rethink helped pioneer proved highly competitive. And over the past several years another company, Universal Robots, came to dominate this space.

Brooks also cofounded iRobot and helped turn Roomba into the most popular consumer robot ever.
Now add Baxter to that list. The robot wasn’t a commercial success but it represents a major contribution from Brooks and his team: It was a milestone in bringing robots closer to people.

Rethink Robotics, Pioneer of Collaborative Robots, Shuts Down




Everything nowadays is ultra, everything is being transcended continually in thought as well as in action. No one knows himself any longer; no one can grasp the element in which he lives and works or the material that he handles. Pure simplicity is out of the question; of simplifiers we have enough. Young people are stirred up much too early in life and then carried away in the whirl of the times. Wealth and rapidity are what the world admires…. Railways, quick mails, steamships and every possible kind of rapid communication are what the educated world seeks but it only over-educates itself and thereby persists in its mediocrity. It is, moreover, the result of universalization that a mediocre culture becomes common [culture]....

Johann Wolfgang von Goethe - 1825 - Goethe’s Letters to Zelter.





As scientists strive to make sense of ever more complex phenomena such as turbulence, then, perhaps it is worthwhile listening to what artists think about them. As Derges puts it, “I feel there will probably always be a movement back and forth between the controlled and chaotic environments of simulated and real fluid events in order to be able to make images that communicate something of the mystery of what lies behind the visible.” The most revealing images of flow patterns, she says, “need to be situated in between something that has been closely observed and something that has been emotionally experienced.”

That something which is “emotionally experienced” should find any place in science might horrify some scientists. It needn’t. We now know that emotional experience plays a significant role in cognition: it can be a part of what allows us to grasp the essence of what happens. There are researchers who already accept the value of this. Last fall, for example, physical oceanographer Larry Pratt of the Woods Hole Oceanographic Institution in Massachusetts and performing artist Liz Roncka led a workshop near MIT in Cambridge in which the participants, mostly mathematicians and scientists, were encouraged to dance their interpretation of turbulence. As Genevieve Wanucha, science writer for the “Oceans at MIT” program, reported, Pratt “was able to improvise complex movements that responded fluidly to the motion of his partner’s body, inspired by obvious intuition about turbulence.” Wanucha explains that Pratt uses dance “as a teaching tool to elegantly and immediately represent to the human mind how eddies transport heat, nutrients, phytoplankton or spilled oil down beneath the ocean surface.” His hope is that such an approach will help young scientists working on ocean flows to “gain a more intuitive understanding” of their work.

A feeling for flow




It is a sign of the times that one of the best-known moral claims by an American business is Google’s: “Don’t be evil.” At least they have one.  But it is interesting to reflect on. Put aside whether Google has lived up to its credo or not. How did we get to the point where the highest standard a business will hold itself to is simply the absence of evil?

And how did we get to a so-called “ethics” of business that insists that the only affirmative responsibility of a corporate executive is to maximize value for shareholders?

I believe that these corrosive moral claims derive from a fundamentally flawed understanding of how market capitalism works, grounded in the dubious assumption that human beings are “homo economicus”:  perfectly selfish, perfectly rational, and relentlessly self-maximizing. It is this behavioral model upon which all the other models of orthodox economics are built. And it is nonsense.

The last 40 years of research across multiple scientific disciplines has proven, with certainty, that homo economicus does not exist. Outside of economic models, this is simply not how real humans behave. Rather, Homo sapiens have evolved to be other-regarding, reciprocal, heuristic, and intuitive moral creatures. We can be selfish, yes—even cruel. But it is our highly evolved prosocial nature—our innate facility for cooperation, not competition—that has enabled our species to dominate the planet, and to build such an extraordinary—and extraordinarily complex—quality of life. Pro-sociality is our economic super power.

How to Destroy Neoliberalism: Kill ‘Homo Economicus’





This is a strong signal about the future of some forms of work - and, of course, play. It may become a regular part of our lives.
Lockheed is expanding its use of augmented reality after seeing some dramatic effects during testing. Technicians needed far less time to get familiar with and prepare for a new task or to understand and perform processes like drilling holes and twisting fasteners.

NASA is using HoloLens AR headsets to build its new spacecraft faster

Lockheed Martin engineers wear the goggles to help them assemble the crew capsule Orion—without having to read thousands of pages of paper instructions.
When you work at a factory that pumps out thousands of a single item, like iPhones or shoes, you quickly become an expert in the assembly process. But when you are making something like a spacecraft, that comfort level doesn’t come quite so easily.

Traditionally, aerospace organizations have relied upon thousand-page paper manuals to relay instructions to their workers. In recent years, firms like Boeing and Airbus have started experimenting with augmented reality, but it’s rarely progressed beyond the testing phase. At Lockheed, at least, that’s changing. The firm’s employees are now using AR to do their jobs every single day.

Spacecraft technician Decker Jory uses a Microsoft HoloLens headset on a daily basis for his work on Orion, the spacecraft intended to one day sit atop the powerful—and repeatedly delayed—NASA Space Launch System. “At the start of the day, I put on the device to get accustomed to what we will be doing in the morning,” says Jory. He takes the headset off when he is ready to start drilling. For now, the longest he can wear it without it getting uncomfortable or too heavy is about three hours. So he and his team of assemblers use it to learn a task or check the directions in 15-minute increments rather than for a constant feed of instructions.


The looming emergence of self-driving transportation may be a perfect signal of network effects and the power of sharing information - where each self-driving vehicle can share what it learns immediately with all other self-driving vehicles whether they are virtual or actual.
“Let’s say you’re testing a scenario where there’s a jaywalker jumping out from a vehicle,” Dolgov says. “At some point it becomes dangerous to test it in the real world. This is where the simulator is incredibly powerful.”

Waymo’s cars drive 10 million miles a day in a perilous virtual world

A simulation lets autonomous cars experience situations that are too dangerous to try in reality.
You could argue that  Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It’s certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn’t enough.

So today Waymo not only announced that its vehicles have clocked more than 10 million miles since 2009. It also revealed that its software now drives the same distance inside a sprawling simulated version of the real world every 24 hours—the equivalent of 25,000 cars driving 24/7. Waymo has covered more than 6 billion virtual miles in total.

This virtual test track is incredibly important to Waymo’s efforts to demonstrate that its cars are safe, says Dmitri Dolgov, the firm’s CTO. It lets engineers test the latest software updates on a wide variety of new scenarios, including situations that haven’t been seen on real roads. It also makes it possible to test scenarios that would be too risky to set up for real, like other vehicles driving recklessly at high speed.

Here’s the latest report, from Waymo itself.

Where the next 10 million miles will take us

Our self-driving vehicles just crossed 10 million miles driven on public roads.
When it comes to driving, experience is the best teacher, and that experience is even more valuable when it’s varied and challenging. These millions of miles were driven in 25 cities across the United States: in sunny California, dusty Arizona, and snowy Michigan, and from the high-speed roads around Phoenix to the dense urban streets of San Francisco.

Our progress on public roads is made possible by our deep investment in simulation. By the end of the month, we’ll cross 7 billion miles driven in our virtual world (that’s 10 million miles every single day). In simulation, we can recreate any encounter we have on the road and make situations even more challenging through “fuzzing.” We can test new skills, refine existing ones, and practice extremely rare encounters, constantly challenging, verifying, and validating our software. We can learn exponentially through this combination of driving on public roads and simulation.

Thanks to nearly 10 years of experience, and keeping safety at the core of everything we do, we’ve been able to put the world’s first fleet of fully self-driving vehicles on the road. Safety is baked into how we drive today: we stay out of other drivers’ blind spots, give wide berth to pedestrians, and come to a full stop at 4-way stops. In Phoenix, Arizona, over 400 early riders use our app and ride in our cars, allowing them to get around town without the stress of driving and with the peace of mind that they’ll arrive safely.
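
“Fuzzing,” as used above, simply means perturbing a recorded encounter along many dimensions to generate new, often harder, variants. Here is a minimal sketch of that idea in Python - purely illustrative, with made-up scenario fields, not a description of Waymo’s actual simulator:

    import random

    # Illustrative only: take one recorded encounter and generate harder variants
    # by randomly perturbing its parameters ("fuzzing"). The fields below are
    # invented for this sketch; they do not describe Waymo's real system.
    base_scenario = {
        "jaywalker_speed_mps": 1.5,       # how quickly the pedestrian steps out
        "jaywalker_offset_m": 12.0,       # distance ahead of the car when they appear
        "occluding_car_speed_mps": 13.0,  # speed of the vehicle they emerge from behind
        "road_friction": 1.0,             # 1.0 = dry pavement
    }

    def fuzz(scenario, n_variants=1000, jitter=0.3):
        """Yield randomly perturbed copies of a recorded scenario."""
        for _ in range(n_variants):
            yield {key: value * random.uniform(1.0 - jitter, 1.0 + jitter)
                   for key, value in scenario.items()}

    # Each variant would be replayed against the driving software in simulation,
    # and any variant producing an unsafe outcome gets flagged for engineers.
    for i, variant in enumerate(fuzz(base_scenario, n_variants=3)):
        print(i, variant)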


This is a fascinating signal about the potential of new materials and ways of manufacturing that could contribute to mitigating climate change.
“This is a completely new concept in materials science,” says Strano, the Carbon C. Dubbs Professor of Chemical Engineering. “What we call carbon-fixing materials don’t exist yet today” outside of the biological realm, he says, describing materials that can transform carbon dioxide in the ambient air into a solid, stable form, using only the power of sunlight, just as plants do.
“Imagine a synthetic material that could grow like trees, taking the carbon from the carbon dioxide and incorporating it into the material’s backbone.”

Self-healing material can build itself from carbon in the air

Taking a page from green plants, new polymer “grows” through a chemical reaction with carbon dioxide.
A material designed by MIT chemical engineers can react with carbon dioxide from the air, to grow, strengthen, and even repair itself. The polymer, which might someday be used as construction or repair material or for protective coatings, continuously converts the greenhouse gas into a carbon-based material that reinforces itself.

The current version of the new material is a synthetic gel-like substance that performs a chemical process similar to the way plants incorporate carbon dioxide from the air into their growing tissues. The material might, for example, be made into panels of a lightweight matrix that could be shipped to a construction site, where they would harden and solidify just from exposure to air and sunlight, thereby saving on the energy and cost of transportation.

The finding is described in a paper in the journal Advanced Materials, by Professor Michael Strano, postdoc Seon-Yeong Kwak, and eight others at MIT and at the University of California at Riverside.


While Rethink Robotics has died, robotics is alive and progressing everywhere.
“Today we have more than 100,000 drive units deployed throughout our global fulfillment network. But many forget those aren’t the only robots we have. We have deployed around 30 palletizer systems, and another popular robot you might see in our fulfillment centers is called RoboStow—a 6-ton robot that has the ability to move pallets of products up to 24-feet high and directly onto the larger drive units.”
“One of the interesting challenges we have is how to deal with misplaced inventory in our fulfillment center. ... One of the simple ideas that has worked really well is to simply take a photo of the storage pod. By combining the photo with knowledge of what is supposed to be in every slot, we then use machine learning to assign a probability that a slot might not have the right inventory”

Brad Porter, VP of Robotics at Amazon, on Warehouse Automation, Machine Learning, and His First Robot

Amazon's chief roboticist discusses the latest advances in the field and how his team is using machine learning to make its robots smarter
Starting with its acquisition of Kiva Systems for $775 million back in 2012, Amazon has been steadily investing in a robotic future. From delivery drones to a rumored home robot to a robotics picking challenge, Amazon definitely wants useful, practical robots to happen. We’re not always sure that they’re going about it the right way, but we are always in favor of companies with as much clout as Amazon has recognizing that robotics is worth focusing on, especially with an understanding that some problems are going to take years of work to solve.

Brad Porter is the vice president of robotics at Amazon. He joined the company over a decade ago, initially working on Amazon’s web operations and e-commerce architecture. He later joined a team led by Jeff Wilke, chief executive of worldwide consumer, as a distinguished engineer, and during that time he oversaw technical preparations for Amazon’s first Prime Day and helped establish the Prime Air drone delivery organization. Porter earned bachelor’s and master’s degrees in computer science at MIT before joining Netscape and later helping start an early cloud technology company. Now leading the company’s robotics efforts, he oversees teams in Seattle, Boston, and Europe. He spoke with IEEE Spectrum via email.
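
The misplaced-inventory idea quoted at the top of this item is easy to make concrete. A hypothetical sketch of the “photo plus expected contents” approach follows - the classifier, item labels, and slot layout are invented for illustration and are not Amazon’s implementation:

    # Hypothetical sketch: compare a photo of each storage-pod slot with what the
    # inventory system says should be there, and flag likely mismatches.
    def classify_slot(slot_image):
        """Stand-in for a trained image classifier returning class probabilities."""
        return {"red_mug": 0.05, "blue_notebook": 0.90, "empty": 0.05}

    def find_misplaced(slot_images, expected_contents, threshold=0.5):
        """Return slots where the expected item probably is not present."""
        flagged = []
        for slot_id, image in slot_images.items():
            probs = classify_slot(image)
            p_expected = probs.get(expected_contents[slot_id], 0.0)
            if p_expected < threshold:   # low confidence the right item is there
                flagged.append((slot_id, 1.0 - p_expected))
        return flagged

    expected = {"A1": "red_mug", "A2": "blue_notebook"}
    images = {"A1": "<crop of slot A1>", "A2": "<crop of slot A2>"}
    print(find_misplaced(images, expected))   # e.g. [('A1', 0.95)]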


Another strong signal about the transformation of medicine emerging with the progress of AI.

Tencent Aims To Train AI To Spot Parkinson's In 3 Minutes

Chinese tech giant Tencent has teamed up with London healthcare firm Medopad to develop artificial intelligence software that can diagnose Parkinson’s disease in minutes.

The new AI system has been trained to spot Parkinson’s by looking at existing video footage of patients. The video analysis was done in collaboration with Kings College Hospital in London.

Dr Wei Fan, head of Tencent Medical AI Lab, said: “Tencent provides the AI technology and capabilities for the video analysis of Parkinson’s disease motor function which will be used in Medopad’s mobile medical application. This technology can help promote early diagnosis of Parkinson’s disease, screening, and daily evaluations of key functions.

“The goal of Tencent and Medopad’s collaboration is to help expand the remit of AI-powered movement assessment from sport and exercise to medicine and to reduce the cost of motor function assessment.”

The duo wants to reduce the time it takes to do the motor function assessment process from over 30 minutes down to less than 3 minutes. The test could potentially be done using smartphone technology developed by Medopad, eliminating the need for a hospital visit.


And one more signal of AI as a medical partner.

Google’s AI is better at spotting advanced breast cancer than pathologists

The firm’s deep-learning tool was able to correctly distinguish metastatic cancer 99% of the time, a greater accuracy rate than human pathologists.

The system: The team trained an algorithm (named Lymph Node Assistant, or LYNA) to spot the features of tumors that have metastasized (that is, spread), which are notoriously difficult to detect. Of the half a million deaths worldwide caused by breast cancer, 90% are due to metastasis.

Gold standard: The 99% rate is superior to the performance of human pathologists, and the algorithm was also better at finding small metastases on individual slides. Human pathologists can miss these as much as 62% of the time when under time pressure, studies have shown.

A useful sidekick: Rather than replacing humans, this technology is more likely to complement their skills, making it easier and quicker to diagnose metastatic tumors. In one study, the algorithm halved the time it took to check a slide on average, cutting it to just one minute per slide.
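
A generic way to picture how such a “sidekick” works (this is a sketch of patch-based slide screening in general, not Google’s published LYNA pipeline): cut the gigapixel slide into small patches, score each patch, and point the pathologist at the most suspicious regions first.

    # Generic illustration of patch-based slide screening (not LYNA itself).
    def tumor_probability(patch):
        """Stand-in for a trained classifier that scores one image patch."""
        return 0.0   # a real model would return a learned probability

    def screen_slide(slide_patches, review_threshold=0.5, top_k=20):
        """Return the most suspicious patch locations for human review."""
        scored = [(coords, tumor_probability(patch)) for coords, patch in slide_patches]
        suspicious = [(c, p) for c, p in scored if p >= review_threshold]
        # Highest-probability regions first, so reviewer time goes where it matters.
        return sorted(suspicious, key=lambda cp: cp[1], reverse=True)[:top_k]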


This is an interesting, longish article - well worth the read. While there is no doubt that humans contribute to climate change, there is lots of room for doubt about what to do and about how accurate our predictions can be. Anyone who has studied complex systems knows the impossibility of prediction. But models, while not perfect for prediction, are excellent and necessary for understanding how complex systems can work.

Forests Emerge as a Major Overlooked Climate Factor

New work at the intersection of atmospheric science and ecology is finding that forests can influence rainfall and climate from across a continent.
The computer models that scientists rely on to predict the future climate don’t even come close to acknowledging the power of plants to move water on that scale, Swann said. “They’re tiny, but together they are mighty.”

Scientists have known since the late 1970s that the Amazon rainforest — the world’s largest, at 5.5 million square kilometers — makes its own storms. More recent research reveals that half or more of the rainfall over continental interiors comes from plants cycling water from soil into the atmosphere, where powerful wind currents can transport it to distant places. Agricultural regions as diverse as the U.S. Midwest, the Nile Valley and India, as well as major cities such as Sao Paulo, get much of their rain from these forest-driven “flying rivers.” It’s not an exaggeration to say that a large fraction of humanity’s diet is owing, at least in part, to forest-driven rainfall.

The world’s major forests, which contain hundreds of billions of trees, can move water on almost inconceivably large scales. Antonio Nobre, a climate scientist at Brazil’s National Institute for Space Research, has estimated, for example, that the Amazon rainforest discharges around 20 trillion liters of water per day — roughly 17 percent more than even the mighty Amazon River.
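
A quick back-of-the-envelope check on that comparison (the river figure below is an assumed round number; commonly cited estimates for the Amazon River’s discharge are on the order of 200,000 cubic metres per second):

    # Rough check: convert ~20 trillion litres/day of forest-driven moisture
    # into m^3/s and compare with an assumed river discharge of ~2.0e5 m^3/s.
    forest_litres_per_day = 20e12
    forest_m3_per_s = forest_litres_per_day / 1000 / 86400   # litres -> m^3, day -> s
    assumed_river_m3_per_s = 2.0e5

    print(round(forest_m3_per_s))                              # ~231,000 m^3/s
    print(round(forest_m3_per_s / assumed_river_m3_per_s, 2))  # ~1.16, broadly in line
                                                               # with the "17 percent more" above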

Such results also imply a profound reversal of what we would usually consider cause and effect. Normally we might assume that “the forests are there because it’s wet, rather than that it’s wet because there are forests,” said Douglas Sheil, an environmental scientist at the Norwegian University of Life Sciences campus outside Oslo. But maybe that’s all backward. “Could [wet climates] be caused by the forests?” he asked.


And of course, in the process of domesticating DNA, at least some things may bring us delight - the images are worth a look.

The Produce of the Future Could Taste Better, Reduce Waste, and Look Very, Very Cool

New York’s first Variety Showcase, held last week near Union Square, was like an Apple event for people who geek out over actual fruit. Instead of unveiling new iPhones, the goal of this Showcase — which was produced by the Oregon-based Culinary Breeding Network group and GrowNYC — was to help connect plant breeders, farmers, and chefs so that they can, in turn, create, grow, and cook fruits and vegetables that are better in every sense imaginable.

It might seem straightforward, but the system for essentially creating new produce — and making people excited to eat it — requires input at every stage of development. “Most farmers don’t grow their own seeds,” explains Lane Selman, the founder of CBN. “They help a seed live up to its potential, but they can’t control the traits within the seed.” That work is done by plant breeders, who can make specific adjustments that will affect the field performance, appearance, and nutritional value of a fruit or vegetable.

Where it gets exciting is when chefs work with the breeders to help create traits that are specifically appealing to cooks. To take just one example: Oregon State University breeder Jim Myers developed a new habanero pepper with rounded shoulders and straight sides after a panel of local chefs commented that this new shape would create less waste during prep. They can also work to make vegetables that taste more concentrated, or look more appealing.


It seems like the quantum of time just got much smaller - it will be amazing to see what can be seen.

World’s fastest camera freezes time at 10 trillion frames per second

What happens when a new technology is so precise that it operates on a scale beyond our characterization capabilities? For example, the lasers used at INRS produce ultrashort pulses in the femtosecond range (10⁻¹⁵ s) that are far too short to visualize. Although some measurements are possible, nothing beats a clear image, says INRS professor and ultrafast imaging specialist Jinyang Liang. He and his colleagues, led by Caltech’s Lihong Wang, have developed what they call T-CUP: the world’s fastest camera, capable of capturing ten trillion frames per second. This new camera literally makes it possible to freeze time to see phenomena—and even light!—in extremely slow motion.
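
For a sense of the timescale: at ten trillion frames per second, the gap between consecutive frames works out to about 100 femtoseconds, the same order of magnitude as the laser pulses being imaged.

    # Interval between frames at 10 trillion frames per second.
    frames_per_second = 10e12
    frame_interval_s = 1 / frames_per_second
    print(frame_interval_s)           # ~1e-13 s
    print(frame_interval_s / 1e-15)   # ~100 femtoseconds per frame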


As we domesticate DNA, our capacity to understand our own history accelerates - this is an interesting signal of the potential benefits arising from DNA transfers.

Deep in Human DNA, a Gift From the Neanderthals

Long ago, Neanderthals probably infected modern humans with viruses, perhaps even an ancient form of H.I.V. But our extinct relatives also gave us genetic defenses.
People of Asian and European descent — almost anyone with origins outside of Africa — have inherited a sliver of DNA from some unusual ancestors: the Neanderthals.

These genes are the result of repeated interbreeding long ago between Neanderthals and modern humans. But why are those genes still there 40,000 years after Neanderthals became extinct?

As it turns out, some of them may protect humans against infections. In a study published on Thursday, scientists reported new evidence that modern humans encountered new viruses — including some related to influenza, herpes and H.I.V. — as they expanded out of Africa roughly 70,000 years ago.

Some of those infections may have been picked up directly from Neanderthals. Without immunity to pathogens they had never encountered, modern humans were particularly vulnerable.


This is a very strong signal of new approaches in our study of biology based on our domestication of DNA.

Human Retinas Grown in a Dish Explain How Color Vision Develops

Lab-grown organoids reveal the mysterious process of eye tissue formation that takes place in the womb
Biologists at Johns Hopkins University grew human retinas from scratch to determine how cells that allow people to see in color are made.

The work, set for publication in the journal Science, lays the foundation to develop therapies for eye diseases such as color blindness and macular degeneration. It also establishes lab-created "organoids" as a model to study human development on a cellular level.

"Everything we examine looks like a normal developing eye, just growing in a dish," said Robert Johnston, a developmental biologist at Johns Hopkins. "You have a model system that you can manipulate without studying humans directly."


This is an important signal to all people concerned with software, interoperability, and the security of future developments. We may see other proprietary vendors following in these footsteps.
"We recognized open source is something that every developer can benefit from. It's not nice, it's essential. It's not just code, it's community. We don't just throw code on the website. We openly publish our roadmap, and we have 20,000 Microsoft employees on GitHub. With over 2,000 open-source projects, we're the largest open-source project supporter in the world."

Microsoft open-sources its patent portfolio

By joining the Open Invention Network, Microsoft is offering its entire patent portfolio to all of the open-source patent consortium's members.
Several years ago, I said the one thing Microsoft has to do -- to convince everyone in open source that it's truly an open-source supporter -- is stop using its patents against Android vendors. Now, it's joined the Open Invention Network (OIN), an open-source patent consortium. Microsoft has essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members.

Before Microsoft joined, OIN had more than 2,650 community members and owns more than 1,300 global patents and applications. OIN is the largest patent non-aggression community in history and represents a core set of open-source intellectual-property values. Its members include Google, IBM, Red Hat, and SUSE. The OIN patent license and member cross-licenses are available royalty-free to anyone who joins the OIN community.


And another signal regarding the deep advantages of open source.

Jupyter, Mathematica, and the Future of the Research Paper

The Atlantic has a great article on new ways to share research results. Its three parts make three points:

- A graphical user interface (GUI) can facilitate better technical writing.
- Wolfram’s proprietary notebook showcased innovative technology but, decades after its introduction, still has few users.
- Jupyter is a new open-source alternative that is well on the way to becoming a standard for exchanging research results.

Each is spot on. I had to learn the hard way why so many kept their distance from Mathematica. Now, I’m much more productive with Jupyter. I’m experimenting with, and excited about, its potential as a way to write up research results.

The article asks why Jupyter succeeded where Mathematica failed. The obvious contrast is between the proprietary world of Wolfram and the open-source model of the software ecosystem that Jupyter mobilizes.
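
Part of what makes Jupyter a natural exchange format is that a notebook is just an open JSON document that any program can read or write. A small sketch using the open-source nbformat library (the filename and cell contents are arbitrary examples):

    # A Jupyter notebook is plain JSON; here we build one programmatically.
    import nbformat
    from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

    nb = new_notebook()
    nb.cells = [
        new_markdown_cell("# Results\nA short write-up of the analysis."),
        new_code_cell("print(2 + 2)"),
    ]

    with open("results.ipynb", "w") as f:
        nbformat.write(nb, f)   # readable by Jupyter, GitHub, nbviewer, or any JSON tool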


The digital environment’s atmosphere of information can appear to be ever present - ever ubiquitous in our lives - but it is also ephemeral and effervescent. Thus emerges the need for new forms of archival institutions.

The Internet’s keepers? “Some call us hoarders—I like to say we’re archivists”

Wayback Machine Director Mark Graham outlines the scale of everyone's favorite archive.
As much as subscription services want you to believe it, not everything can be found on Amazon or Netflix. Want to read Brett Kavanaugh buddy Mark Judge’s old book, for instance (or their now infamous yearbook even)? Curious to watch a bunch of vintage smoking ads? How about perusing the largest collection of Tibetan Buddhist literature in the world? There’s one place to turn today, and it’s not Google or any pirate sites you may or may not frequent.

“I’ve got government video of how to wash your hands or prep for nuclear war,” says Mark Graham, director of the Wayback Machine at the Internet Archive. “We could easily make a list of .ppt files in all the websites from .mil, the Military Industrial PowerPoint Complex.”

Graham recently talked with several small groups of attendees at the 2018 Online News Association conference, and Ars was lucky enough to be part of one. He later made a full presentation to the conference, which is now available in audio form. And the immediate takeaway is that the scale of the Internet Archive today may be as hard to fathom as the scale of the Internet itself.

The longtime non-profit’s physical space remains easy to comprehend, at least, so Graham starts there. The main operation now runs out of an old church (pews still intact) in San Francisco, with the Internet Archive today employing nearly 200 staffers. The archive also maintains a nearby warehouse for storing physical media—not just books, but things like vinyl records, too. That’s where Graham jokes the main unit of measurement is “shipping container.” The archive gets that much material every two weeks.

2 comments:

  1. Hi John
    I feel I am still stuck on Mondays.
    Where did you find the Goethe's quote?
    (I tortured Google ...in vain)

  2. Hi Nicole - I found the Goethe quote in Geoffrey West's book "Scale: The Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies" - p. 328.
