Thursday, November 8, 2018

Friday Thinking 9 Nov 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:



Our future will be bright, fast—and full of robots. It’ll be more Asimov than Terminator: servant robots, more or less similar to us. Some will be upright androids, but most will be boxes filled with computer chips running software agents. And there will be a lot of them. Forecasts predict that, within just three years, we’ll have 1.7 million robots in industry, 32 million in our households, and 400,000 in professional offices.

Robots will begin to run our factories. Autonomous sensors will monitor infrastructure. Robots will order parts for themselves and raw materials for production. Logistics will be run by chains of unmanned vehicles stationed at autonomous bases. Factories will communicate with each other. Drone traffic control systems will request weather information from meteorological stations belonging to other companies.

All of this will be based on the exchange of information. Not just technical information—robots will need to develop and maintain economic relationships. Whether for a parts order or a service agreement with another company, many aspects of their work will revolve around currency transactions. Human operators will be too slow to oversee these transactions, which we can expect to happen at 20,000 transactions per second (assuming there is at least one robotic device per person). Therefore, for the future we are building, we will need to invent not just robots—but robot money and robot markets.

The Robot Economy Will Run on Blockchain




Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?

The answer, in a word, is experience. The difference between the possible and the practical can only be discovered by trying things out. Therefore, even though the physics suggests that a thing will work, if it has not even been demonstrated in the lab you can consider that thing to be a long way off. If it has been demonstrated in prototypes only, then it is still distant. If versions have been deployed at scale, and most of the necessary refinements are of an evolutionary character, then perhaps it may become available fairly soon. Even then, if no one wants to use the thing, it will languish in the warehouse, no matter how much enthusiasm there is among the technologists who developed it.

Here I present a short list of technology projects that are now under way or at least under serious discussion. In each case I’ll point out features that tend to make a technology easy or hard to bring to market.

The Rodney Brooks Rules for Predicting a Technology's Commercial Success





It seems that the post-modern primitive - the cadre of people with tattoos, piercings, body modifications (including cosmetic surgeries) and implants - is growing. This is a signal of the emerging possibilities of the cyborg and the enhanced human: new capabilities and new senses integrated into an expanding sensorium. It is worth considering - surely more of our children will be drawn to incorporate some of these technologies (including DNA therapies, synthetic organs and more).
"The first thing I felt upon receiving the vibrations was a burning sensation followed by a feeling of satisfaction. It was similar to what you might feel when getting a tattoo but more intense. Then I felt happiness and pain at the same time," he added. "Cyborg technology is offering us a look into the unknown. My purpose with this project is to perceive the nonphysical or paranormal so I can find further avenues of self-development."

this artist got a piece of machinery implanted in his cheekbones

i-D meets the Spanish cyborg pushing the boundaries of body modification.
The more entrenched technology becomes in our lives, the more people are beginning to think that cyborgs are just the logical next step in the evolution of humanity. Naturally, some people are choosing to put themselves at the forefront of this, by converting parts of their bodies to machinery.

One of those guys is artist Joe Dekni, who last week got a piece of machinery implanted in his cheekbones. The artificial organ was based on the echolocation sonar used by bats to identify objects in their environment, and is meant to allow Dekni to feel the vibrations of his surroundings. The operation took place at the Transpecies Society space in Barcelona and was part of a performance piece that also included an audiovisual installation.

At 22, Joe calls himself an "artist, or perhaps an alchemist," and attributes the ability to introduce this permanent addition to his body to Neil Harbisson — the first person to have been legally recognized as a cyborg by a government. "I was intrigued by the idea of being able to perceive the paranormal or the invisible. I decided to develop my sense of echolocation, which animals like bats or dolphins already have naturally," he explained to i-D Spain after the performance. The artist decided to make his operation public in an attempt to demonstrate that furthering your senses is just another option that people can have today — "just another way of living," as he put it.


Adam Smith first used the term ‘the invisible hand’ in his first book, The Theory of Moral Sentiments. Everyone wants to be praiseworthy as well as blameless - and so we ask ourselves the question (well, sometimes we do), “What will others think if I should do…?” The digital environment enables the invisible hand to become a very visible ‘velvet glove’.

Spend “frivolously” and be penalized under China’s new social credit system

People who waste money on non-essentials or behave “badly” are penalized under the controversial new ranking.
In 2020, China will fully roll out its controversial social credit score. Under the system, both financial behaviors like “frivolous spending” and bad behaviors like lighting up in smoke-free zones can result in stiff consequences. Penalties include loss of employment and educational opportunities, as well as transportation restrictions. Those with high scores get perks, like discounts on utility bills and faster application processes to travel abroad.

China is currently piloting the program and some citizens have already found themselves banned from traveling or attending certain schools due to low scores. These ramifications have led to a flurry of recent criticism from both human rights groups and the press. This week alone, news outlets like Business Insider and National Public Radio weighed in on China’s social credit score and the stratified society it may foster in the communist country.

The outcry about China’s social credit score is understandable, given that the country’s authoritarian regime leaves citizens with little recourse to challenge the new system. But concerns about China’s credit system have overlooked how the US system also divides consumers along class lines — and has done so for decades. Social behaviors may not factor into US credit scores, but the idea that a person’s financial history reflects trustworthiness has long influenced employment decisions and other factors that affect Americans’ quality-of-life.


Does the boomer generation tend to trust media more or less than younger generations? Have boomers been conditioned to believe the ‘Walter Cronkites’ of media?

Older People Are Worse Than Young People at Telling Fact From Opinion

Given five facts, only 17 percent of people over 65 were able to identify them all as factual statements.
Americans over 50 are worse than younger people at telling facts from opinions, according to a new study by Pew Research Center.

Given 10 statements, five each of fact and opinion, younger Americans correctly identified both the facts and the opinions at higher rates than older Americans did. Forty-four percent of younger people identified all five opinions as opinions, while only 26 percent of older people did. And 18-to-29-year-olds performed more than twice as well as the 65+ set. Of the latter group, only 17 percent classified all five facts as factual statements.


This is a fun website with lots of signals of the emerging world of robots and AI - worth the view for anyone interested in the accelerating diversity - a true robot ecology.

ROBOTS

YOUR GUIDE TO THE WORLD OF ROBOTICS
Robots are a diverse bunch. Some walk around on their two, four, six, or more legs, while others can take to the skies. Some robots help physicians to do surgery inside your body; others toil away in dirty factories. There are robots the size of a coin and robots bigger than a car. Some robots can make pancakes. Others can land on Mars.

This diversity—in size, design, capabilities—means it’s not easy to come up with a definition of what a robot is.


Here’s a nice 2 min video of a diving robot.

Stanford's humanoid robot explores an abandoned shipwreck

The robot, called OceanOne, is powered by artificial intelligence and haptic feedback systems, allowing human pilots an unprecedented ability to explore the depths of the oceans in high fidelity.


A 22-page report on the Future of Work by the World Economic Forum

Eight Futures of Work: Scenarios and their Implications

Eight Futures of Work: Scenarios and their Implications presents various possible visions of what the future of work might look like by the year 2030. Based on how different combinations of three core variables—the rate of technological change and its impact on business models; the evolution of learning among the current and future workforce; and the magnitude of talent mobility across geographies—are likely to influence the nature of work in the future, the White Paper provides a starting point for considering a range of options around the multiple possible futures of work.

It is imperative that governments, businesses, academic institutions and individuals consider how to proactively shape a new, positive future of work—one that we want rather than one created through inertia. Accordingly, while the scenarios presented in this White Paper are designed to create a basis for discussion among policy-makers, businesses, academic institutions and individuals, they are not predictions. Instead, they are a practical tool to help identify and prioritize key actions that are likely to promote the kind of future that maximizes opportunities for people to fulfil their full potential across their lifetimes.


Anyone who watched Caprica, the prequel television series to the re-imagined Battlestar Galactica, will be familiar with the following signal - life imitating art, or science fiction creating the imagination for invention. You gotta love the concept of augmented eternity.
While most older people haven’t amassed enough digital detritus to build a working artificial intelligence, Rahnama posits that in the next few decades, as we continue to create our digital footprints, millennials will have generated enough data to make it feasible. Even as we speak, the digital remains of the dead accumulate. Something like 1.7 million Facebook users pass away each year. Some online accounts of the dead are deleted, while others linger in perpetual silence. “We are generating gigabytes of data on a daily basis,” Rahnama says. “We now have a lot of data, we have a lot of processing power, we have a lot of storage capability.” With enough data about how you communicate and interact with others, machine-learning algorithms can approximate your unique personality—or at least some part of it.
the main complication with trying to create digital versions of the dead is that people are complicated. “We’re extremely different when we talk to different people,” she says. “We’re basically like twenty thousand personalities at once.”
Augmented Eternity will take a step toward accommodating various personalities by tailoring the conversation according to context and letting users control what data is accessible to whom.

Digital immortality: How your life’s data means a version of you could live forever

Your family and friends will be able to interact with a digital “you” that doles out advice—even when you’re gone.
Hossein Rahnama knows a CEO of a major financial company who wants to live on after he’s dead, and Rahnama thinks he can help him do it.

Rahnama is creating a digital avatar for the CEO that they both hope could serve as a virtual “consultant” when the actual CEO is gone. Some future company executive deciding whether to accept an acquisition bid might pull out her cell phone, open a chat window, and pose the question to the late CEO. The digital avatar, created by an artificial-intelligence platform that analyzes personal data and correspondence, might detect that the CEO had a bad relationship with the acquiring company’s execs. “I’m not a fan of that company’s leadership,” the avatar might say, and the screen would go red to indicate disapproval.

Creepy? Maybe, but Rahnama believes we’ll come to embrace the digital afterlife. An entrepreneur and researcher based at Ryerson University in Toronto, and a visiting faculty member at MIT’s Media Lab, he’s building an application called Augmented Eternity; it lets you create a digital persona that can interact with people on your behalf after you’re dead.


Here is a signal of something that will inevitably become widespread - another AI application.

An AI Lie Detector Is Going to Start Questioning Travelers in the EU

A number of border control checkpoints in the European Union are about to get increasingly—and unsettlingly—futuristic.

In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points.

“We’re employing existing and proven technologies—as well as novel ones—to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis of European Dynamics in Luxembourg told the European Commission. “iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”

The virtual border control agent will ask travelers questions after they’ve passed through the checkpoint. Questions include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” according to New Scientist. The system reportedly records travelers’ faces using AI to analyze 38 micro-gestures, scoring each response. The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.
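
The scoring method behind iBorderCtrl has not been published, but the description above - 38 micro-gesture scores per answer, aggregated into a judgment about the traveler - suggests a simple structure. The sketch below is purely illustrative: the threshold, weighting, and function names are hypothetical, not the project's actual algorithm.

# Purely illustrative sketch - iBorderCtrl's scoring is not public.
# Assume each answer yields 38 micro-gesture scores in [0, 1]; one simple
# way to turn them into a per-traveler flag is an average compared against
# a review threshold. All names and the threshold are hypothetical.
from statistics import mean
from typing import List

RISK_THRESHOLD = 0.6  # hypothetical cut-off for referral to a human agent

def answer_score(gesture_scores: List[float]) -> float:
    """Aggregate the 38 per-gesture scores for a single answer."""
    assert len(gesture_scores) == 38
    return mean(gesture_scores)

def needs_human_review(answers: List[List[float]]) -> bool:
    """Flag a traveler whose average answer score exceeds the threshold."""
    return mean(answer_score(a) for a in answers) > RISK_THRESHOLD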


We are all braced for a world of deepfake news - images and sounds modified in undetectable ways - so here is a positive signal in the ongoing arms race between honest and fake media. The fake video is worth the view for a laugh.
As an added layer of trust and protection, Truepic also stores all photos and metadata using a blockchain—the technology behind Bitcoin that combines cryptography and distributed networking to securely store and track information.

Deepfake-busting apps can spot even a single pixel out of place

Two startups are using algorithms to track when images are edited—from the moment they’re taken.
Falsifying photos and videos used to take a lot of work. Either you used CGI to generate photorealistic images from scratch (both challenging and expensive) or you needed some mastery of Photoshop—and a lot of time—to convincingly modify existing pictures.

Now the advent of AI-generated imagery has made it easier for anyone to tweak an image or a video with confusingly realistic results. Earlier this year, MIT Technology Review senior AI editor Will Knight used off-the-shelf software to forge his own fake video of US senator Ted Cruz. The video is a little glitchy, but it won’t be for long.

Two startups, US-based Truepic (which digital-forensics researcher Hany Farid consults for) and UK-based Serelay, are now working to commercialize this idea. They have taken similar approaches: each has free iOS and Android camera apps that use proprietary algorithms to automatically verify photos when taken. If an image goes viral, it can be compared against the original to check whether it has retained its integrity.

While Truepic uploads its users' images and stores them on its servers, Serelay stores a digital fingerprint of sorts by computing about a hundred mathematical values from each image. (The company claims that these values are enough to detect even a single-pixel edit and determine approximately what section of the image was changed.) Truepic says it chooses to store the full images in case users need to delete sensitive photos from their own devices for safety reasons. (In some instances, Truepic users operating in high-threat scenarios, like a war zone, need to remove the app immediately after they document scenes.) Serelay, in contrast, believes that not storing the photos affords users greater privacy.
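
Serelay has not disclosed which hundred values it computes, but the idea of a compact per-image fingerprint is easy to illustrate. The sketch below is a hypothetical example, not Serelay's method: it records 100 block-mean brightness values at capture time and later compares a candidate copy against them, reporting both whether the image still matches and which block changed most.

# Illustrative fingerprint sketch (hypothetical; not Serelay's algorithm).
# Requires numpy and Pillow.
import numpy as np
from PIL import Image

GRID = 10  # a 10 x 10 grid of blocks -> 100 values per image

def fingerprint(path: str) -> np.ndarray:
    """Return 100 values: the mean brightness of each block of the image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = img.shape
    bh, bw = h // GRID, w // GRID
    blocks = img[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    return blocks.mean(axis=(1, 3)).ravel()

def check_integrity(original_fp: np.ndarray, path: str, tol: float = 0.5):
    """Compare a candidate image against a stored fingerprint.

    Returns (matches, most_changed_block) so a verifier can say roughly
    where an image was altered, not just that it was altered.
    """
    diff = np.abs(fingerprint(path) - original_fp)
    return bool(diff.max() <= tol), int(diff.argmax())

Whether such a coarse fingerprint could really catch a single-pixel edit depends on the tolerance and on recompression noise, which is presumably why the commercial systems compute more sophisticated values.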


Imagine a Universal Basic Income - what would people do? Reflect on who created Wikimedia and the Internet itself. Imagine unleashing the innate curiosity of people with the tools, time, and empowerment to pursue their curiosity.
Citizen science — active public involvement in scientific research — is growing bigger, more ambitious and more networked. Beyond monitoring pollution and snapping millions of pictures of flora and fauna, people are building Geiger counters to assess radiation levels, photographing stagnant water to help document the spread of mosquito-borne disease, and taking videos of water flow to calibrate flood models. And an increasing number are donating thinking time to help speed up meta-analyses or assess images in ways that algorithms cannot yet match.

No PhDs needed: how citizen science is transforming research

Projects that recruit the public are getting more ambitious and diverse, but the field faces some growing pains.
As a biogeochemist at the University of Antwerp in Belgium, Meysman wasn’t used to drawing so much attention. But that was before he adopted the citizens of northern Belgium as research partners. With the help of the Flemish environmental protection agency and a regional newspaper, Meysman and a team of non-academics attracted more than 50,000 people to CurieuzeNeuzen, an effort to assess the region’s air quality (the name is a play on Antwerp dialect for ‘nosy’ people).

The project ultimately distributed air-pollution samplers to 20,000 participants, who took readings for a month (see ‘Street science’). More than 99% of the sensors were returned to Meysman’s laboratory for analysis, yielding a bounty of 17,800 data points. They provided Meysman and his colleagues with information about nitrogen dioxide concentrations at ‘nose height’ — a level of the atmosphere that can’t be discerned by satellite and would be prohibitively expensive for scientists to measure on their own. “It has given us a data set which it is not possible to get by other means,” says Meysman, who models air quality.


This is a signal to watch closely - the beginning of a world DNA sequencing census. Within the next five years, which country will be the first to undertake a human genome sequencing census? As important as any single species is, the vitality and viability of future life lies in a robust and highly diverse gene pool. It must be remembered that genes have a promiscuity all their own, moving between organisms through several means including horizontal gene transfer.
“Having the roadmap, the blueprints … will be a tremendous resource for new discoveries, understanding the rules of life, how evolution works, new approaches for the conservation of rare and endangered species, and … new resources for researchers in agricultural and medical fields,”

$5bn project to map DNA of every animal, plant and fungus

International sequencing drive will involve reading genomes of 1.5m species
An ambitious international project to sequence the DNA of every known animal, plant and fungus in the world over the next 10 years has been launched.

Described as “the next moonshot for biology”, the Earth BioGenome Project is expected to cost $4.7bn (£3.6bn) and involve reading the genomes of 1.5m species.
Prof Harris Lewin of the University of California, Davis, who chairs the project, said it could be as transformational for biology as the Human Genome Project, which decoded the human genome between 1990 and 2003.

Currently, fewer than 3,500, or about 0.2%, of all known eukaryotic species have had their genome sequenced, with fewer than 100 at reference quality. The aim to sequence all known species is a major international effort, involving a US-led project to sequence the genetic code of tens of thousands of vertebrates, a Chinese project to sequence 10,000 plant genomes, and the Global Ant Genomes Alliance, which aims to sequence around 200 ant genomes.

The Wellcome Sanger Institute will lead the effort to sequence the genetic codes of all 66,000 species known to inhabit Britain, including red and grey squirrels and the European robin.
The total volume of biological data that will be gathered is expected to be on the “exascale” – more than that accumulated by Twitter, YouTube or the whole of astronomy.


Domesticating DNA and using mushrooms as a platform - what possibilities await?

Can mushrooms be the platform we build the future on?

Ecovative thinks it can use mycelia, the hair-like network of cells that grows in mushrooms, to help build everything from lab-grown meat to 3D-printed organs to biofabricated leather.
When the first bioreactor-grown “clean meat” shows up in restaurants–perhaps by the end of this year–it’s likely to come in the form of ground meat rather than a fully formed chicken wing or sirloin steak. While it’s possible to grow animal cells in a factory, it’s harder to grow full animal parts. One solution may come from fungi: Mycelia, the hair-like network of cells that grows in mushrooms, can create a scaffold to grow a realistic cut of meat.

“With our platform, we’re able to make these complex structures that have texture that you would cut with a knife and be like, wow, that actually has fibers in it, like meat structure,” says Eben Bayer, founder of Ecovative, a company that recently released a new mycelium-based “biofabrication platform.”

For the company, growing meat without livestock is just one of many applications of the platform. “It’s using nature as a molecular assembler,” Bayer says. Ecovative first launched a decade ago by making packaging, now used by Dell and Ikea, that injects farm waste products with mushroom spawn inside a mold. Days later, the mycelium completes the growth of the product, which can be used as a compostable alternative to Styrofoam. The same process can also be used to grow building materials.

The company’s new MycoFlex platform can create higher-performing materials. The company is now beginning to license the process to other manufacturers. “Our intellectual property is in understanding the growth and the growth processes that’ll coax mycelium to create these very complex structures, do so repeatedly, and do so at scale,” Bayer says.


Is there a shortage of water? Not really - the question is about drinkable water - is there a shortage of drinkable water?
“One could imagine these shipping containers being positioned in a state of readiness throughout the world to be able to respond to disasters for both energy and water,”

A device that can pull drinking water from the air just won the latest XPrize

The winner of the Water Abundance XPrize creates enough water for 100 people every day by making an artificial cloud inside a shipping container.
A new device that sits inside a shipping container can use clean energy to almost instantly bring clean drinking water anywhere–the rooftop of an apartment building in Nairobi, a disaster zone after a hurricane in Manila, a rural village in Zimbabwe–by pulling water from the air.

The design, from the Skysource/Skywater Alliance, just won $1.5 million in the Water Abundance XPrize. The competition, which launched in 2016, asked designers to build a device that could extract at least 2,000 liters of water a day from the atmosphere (enough for the daily needs of around 100 people), use clean energy, and cost no more than 2¢ a liter.

“We do a lot of first principles thinking at XPrize when we start designing these challenges,” says Zenia Tata, who helped launch the prize and serves as chief impact officer of XPrize. Nearly 800 million people face water scarcity; other solutions, like desalination, are expensive. Freshwater is limited and exists in a closed system. But the atmosphere, the team realized, could be tapped as a resource. “At any given time, it holds 12 quadrillion gallons–the number 12 with 19 zeros after it–a very, very, big number,” she says. The household needs for all 7 billion people on earth add up to only around 350 or 400 billion gallons. A handful of air-to-water devices already existed, but were fairly expensive to use.
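
A quick bit of arithmetic puts the prize targets quoted above in perspective; the inputs below come straight from the competition requirements (2,000 liters a day, roughly 100 people, no more than 2 cents a liter).

# Quick arithmetic check of the Water Abundance XPrize targets (illustrative).
liters_per_day = 2000          # minimum daily output required by the prize
people_served = 100            # "enough for the daily needs of around 100 people"
max_cost_per_liter = 0.02      # no more than 2 cents per liter

print(liters_per_day / people_served)        # 20.0 liters per person per day
print(liters_per_day * max_cost_per_liter)   # at most 40.0 dollars per day to run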


How to assess the future possibilities of a technology - a challenge for many of us.

The Rodney Brooks Rules for Predicting a Technology’s Commercial Success

A few key questions will help you distinguish winners from losers
Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?

It’s well worth considering what makes a potential technology easy or hard to develop, because a mistake can lead to unwise decisions. Take, for instance, the International Thermonuclear Experimental Reactor that’s now under construction in France at an estimated cost of US $22 billion. If governments around the world believe that this herculean effort will automatically lead to success and therefore to near-term commercial fusion reactors, and if they plan their national energy strategies around that assumption, their citizens may very well be disappointed.

Here I present a short list of technology projects that are now under way or at least under serious discussion. In each case I’ll point out features that tend to make a technology easy or hard to bring to market.


A new constellation has been created in recognition of a fictional 20th-century monster-hero. The images of the constellation are worth the view.

Godzilla constellation recognized by NASA claims a corner of space

When Godzilla made his screen debut in 1954, he was, like most Japanese media of the era, designed just for Japan. But in the decades since, the King of the Monsters has expanded his dominion, appearing in theaters around the world.

That international recognition even earned the kaiju his own star on the Hollywood Walk of Fame, but now Godzilla finds himself among not just movie stars, but celestial ones as well, as NASA has announced a Godzilla constellation.

The Godzilla constellation isn’t made up of stars, though. Instead, the astronomic artwork is formed of gamma rays, as observed by NASA’s Fermi Gamma-ray Space Telescope. The telescope launched in 2008, and NASA is marking its decade in service by establishing 21 gamma-ray constellations. Similar to how stargazers of yore took inspiration from ancient legends, the space agency is saluting modern mythos with constellations referencing not only Godzilla, but also "Star Trek," "The Little Prince" and "The Incredible Hulk."

Thursday, November 1, 2018

Friday Thinking 2 Nov 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:




Osterlund believes there are two key reasons microchips have taken off in Sweden. First, the country has a long history of embracing new technologies before many others and is quickly moving toward becoming a cashless society.

In the 1990s, the Swedish government invested in providing fast Internet services for its citizens and gave tax breaks to companies that provided their workers with home computers. And well-known tech names such as Skype and Spotify have Swedish roots.

"The more you hear about technology, the more you learn about technology, the less apprehensive you get about technology," Osterlund says.

Only 1 in 4 people living in Sweden uses cash at least once a week. And, according to the country's central bank, the Riksbank, the proportion of retail cash transactions has dropped from around 40 percent in 2010 to about 15 percent today.

Osterlund's second theory is that Swedes are less concerned about data privacy than people in other countries, thanks to a high level of trust for Swedish companies, banks, large organizations and government institutions.

Swedes are used to sharing personal information, with many online purchases and administrative bodies requiring their social security numbers. Mobile phone numbers are widely available in online search databases, and people can easily look up each other's salaries by calling the tax authority.

Thousands Of Swedes Are Inserting Microchips Under Their Skin




The number of people getting DNA reports has been doubling, roughly, every year since 2010. The figures are now growing by a million each month, and the DNA repositories are so big that they’re enabling surprising new applications. Consumers are receiving scientific predictions about whether they’ll go bald or get cancer. Investigators this year started using consumer DNA data to capture criminals. Vast gene hunts are under way into the causes of insomnia and intelligence. And 23andMe made a $300 million deal this summer with drug company GlaxoSmithKline to develop personalized drugs, starting with treatments for Parkinson’s disease. The notion is that targeted medicines could help the small subset of Parkinson’s patients with a particular gene error, which 23andMe can easily find in its database.

Look how far precision medicine has come




many of the moral principles that guide a driver’s decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally.

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” says Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study.

The debate about whether ethics are universal or vary between cultures is an old one, says Christakis, and now the “twenty-first century problem” of how to programme self-driving cars has reinvigorated it.

Self-driving car dilemmas reveal that moral choices are not universal





Are we ready for a full-blown digital environment? Some people are readier than others.
Digiwell’s microchip implants run from $40 to $250, and Kramer charges $30 to inject them, either in his Hamburg office or while traveling (he did Geronimo’s implant in the lobby of a Berlin hotel). His clients include a lawyer who wants access to confidential files without remembering a password, a teen with no arms who uses a chip in her foot to open doors, and an elderly man with Parkinson’s disease who once collapsed in front of his house after trying for two hours to get his key into the lock. He now uses a chip in his hand to open the door.
Size of the chips sold by Digiwell for implanting: 12mm

Biohackers Are Implanting Everything From Magnets to Sex Toys

The human augmentation market may grow tenfold, to $2.3 billion, by 2025.
BOTTOM LINE - Biohacking advocates say 100,000 people have chips implanted under their skin, which they use to open doors, store passwords and personal data, and augment their art.
Patrick Kramer sticks a needle into a customer’s hand and injects a microchip the size of a grain of rice under the skin. “You’re now a cyborg,” he says after plastering a Band-Aid on the small wound between Guilherme Geronimo’s thumb and index finger. The 34-year-old Brazilian plans to use the chip, similar to those implanted in millions of cats, dogs, and livestock, to unlock doors and store a digital business card.

Kramer is chief executive officer of Digiwell, a Hamburg startup in what aficionados call body hacking—digital technology inserted into people. Kramer says he’s implanted about 2,000 such chips in the past 18 months, and he has three in his own hands: to open his office door, store medical data, and share his contact information. Digiwell is one of a handful of companies offering similar services, and biohacking advocates estimate there are about 100,000 cyborgs worldwide. “The question isn’t ‘Do you have a microchip?’  ” Kramer says. “It’s more like, ‘How many?’ We’ve entered the mainstream.”


Well, Google Glass may not have become the next tablet - but neither did it go away (it has become a manufacturing aid) - and this may not either (especially given the cost). That said, augmented reality is getting ever closer to being a daily tool. The images are worth the view - they give clear indications of use and style.
"Focals are a pair of everyday smart glasses that are designed from the eyewear-first perspective," Stephen Lake, North's CEO and co-founder, told Engadget. "We realized that it can't be like previous approaches of smart glasses, where they try to stick a computer or tech on your face."

Custom-made smart glasses pick up where Google Glass left off

Focals look like something you might actually wear.
Earlier this month, Thalmic Labs announced it would be ending the production of Myo, a gesture-controlled armband that it's been developing for the past few years. The company has now changed its name to North and has decided to shift focus to an entirely different project. Today, it's finally ready to reveal what that project is. It's called Focals, a pair of smart glasses that uses holographic display technology.

Focals are designed to take your head away from being buried in your phone. "The core philosophy of the product is about keeping you present in the world," said Lake. "It's subtle and designed around the human experience."

One of the ways it achieves this is by relegating all the controls to a ring you wear on your index finger. The ring is called the Loop, and it has a joystick as well as a little D-pad. (That's why the Loop needs to be on your index finger - so you can use your thumb to manipulate the joystick.)

"Some of the previous products with a tech-first approach, they were putting touchpads on your head," said Lake, an example being Google Glass. "And it was just awkward and weird, to be tapping at your glasses and talking to yourself during a meeting." Using Loop, on the other hand, is very discreet. You could be going through your messages with your hand in your pocket, without anyone ever noticing.


This is a very interesting signal - it shows how difficult it can be to track trends and predict the future of a new technology. It could be that augmented reality headsets are a necessary precursor to virtual reality - or it could be that the ‘killer app’ for VR is education rather than gaming. What is clear is that, to be truly disruptive, a technology has to be cheap and have a key use. Is VR really dying? I don’t think so - but it may take much longer to become ubiquitous - until a great use case (and related content) is developed.
In my opinion — as someone who watched this new generation of virtual reality emerge from the earliest days, and was one of its biggest fans — VR adoption will only happen when the barrier to entry is akin to slipping on a pair of sunglasses (and even then it’s no sure thing). Most people don’t want to wear a bulky headset, even in private, there’s no must have “killer app” for VR, and no one has made a simple plug-and-play option that lets a novice user engage casually.

The virtual reality dream is dying

We were promised better worlds, and all we got was this lousy headset.
Hilmar Veigar Pétursson, CEO of the studio CCP Games (responsible for the massive Eve: Online), says the company doesn't see much of a future for virtual reality gaming. “We expected VR to be two to three times as big as it was, period,” Pétursson told gaming site Destructoid just a few days ago, adding, “You can't build a business on that.” There's a chance the company could jump back in, but their own data showed adoption was slow, even among enthusiasts. “The important thing is we need to see the metrics for active users of VR. A lot of people bought headsets just to try it out. How many of those people are active? We found that in terms of our data, a lot of users weren't,” Pétursson said.

So what happened here? VR was supposed to be a revolution, with companies like Oculus pioneering a whole new way for gamers and non-gamers alike to be immersed in digital environments — but that excitement has markedly cooled. The media (yes, me included, at least early on) has gone through several cycles of fawning, optimistic prognostication, and... wishful thinking? — but for all the hype we have very little consumer interest to show for it. Oculus sold off to Facebook and has become little more than a parlor trick Mark Zuckerberg shows off at every FB event. As Ben Thompson recently noted, the bet on the company is an awkward fit for Facebook that strays from Zuckerberg’s strengths in several ways:


Talking about learning - this is an excellent one-hour video presentation by the key author of the concept of ‘communities of practice’. It is vital for anyone concerned with knowledge generation, transfer, management and governance.

Dr Etienne Wenger: Learning in landscapes of practice

Dr Etienne Wenger presented 'Learning in landscapes of practice: recent developments in social learning theory' on Wednesday 1 May 2013 as part of the Festival of Research in the Brighton Fringe.

Learning is often viewed as something individuals do as they acquire information and skills. It is usually associated with some form of instruction. Dr Wenger presents a different perspective on learning, one that starts with the assumption that learning is an inherent dimension of everyday life and that it is fundamentally a social process. From this perspective, a living "body of knowledge" can be viewed as collection of communities of practice. Learning is not merely the acquisition of a curriculum, but a journey across this landscape of practice, which is transformative of the self. Achieving a high level of "knowledgeability" is a matter of negotiating a productive identity with respect to the various communities of practice that constitute this landscape. This lecture reviews the main tenets of this learning theory, the ways in which it has been used in practice, and more recent developments.


The self-driving vehicle is coming ever closer to becoming a common feature of our transportation reality.
California already has more than 50 companies testing autonomous vehicles on its public roads, but at the current time, they all include safety drivers.

Waymo receives first permit to test fully driverless cars in California

Waymo’s plan for a robot-taxi service has just taken another big step forward after the company became the first in California to receive approval for testing fully driverless cars on the state’s roads.

It announced the news on Tuesday, October 30, after the California Department of Motor Vehicles (DMV) gave it the green light to test its self-driving cars without the need for a safety driver.
In a blog post, the autonomous-car unit of Google parent Alphabet said it will start off by testing its vehicles in the streets around Mountain View, California, close to its headquarters.

“Waymo’s permit includes day and night testing on city streets, rural roads, and highways with posted speed limits of up to 65 miles per hour,” the team said in the post.
It added that its permit also allows for driving in fog and light rain, conditions that its autonomous cars can already comfortably handle.

In a bid to reassure local drivers that safety is its top priority, Waymo said that should one of its driverless vehicles come across a situation that it’s unable to comprehend, it will do “what any good driver would do: Come to a safe stop until it does understand how to proceed. For our cars, that means following well-established protocols, which include contacting Waymo fleet and rider support for help in resolving the issue.”


Another signal that marks the inevitable transformation of our transportation paradigm. Essentially the electric vehicle will soon surpass the carbon fuel vehicle - for cost and range.

'Ultra rapid' electric car charging network coming to Australia

Chargefox stations will allow drivers to charge electric vehicles in just minutes
Drivers travelling between Australia’s major cities could soon charge their electric vehicles in just 15 minutes with a super-fast network being rolled out across the country.

The 21 sites on highways between Adelaide, Melbourne, Canberra, Sydney and Brisbane will be powered entirely by renewable energy. Sites are also planned for Western Australia.

The “ultra rapid” stations will allow electric vehicles to add up to 400km of range in a fraction of the hours it takes to charge at existing points.

Australian start-up Chargefox raised $15m to start building the network, including $6m from the federal government’s Australian Renewable Energy Agency.
Arena’s chief executive, Darren Miller, said it was a “game changing” project that would help ease worries about range and encourage more people to drive electric vehicles.


A new technique for chemical analysis represents a significant breakthrough - and could accelerate our knowledge of living systems.
“I am blown away by this,” says Carolyn Bertozzi, a chemist at Stanford University in Palo Alto, California. “The fact that you can get these structures from [a sample] a million times smaller than a speck of dust, that’s beautiful. It’s a new day for chemistry.”

‘A new day for chemistry’: Molecular CT scan could dramatically speed drug discovery

This week, two research teams report they’ve adapted a third technique, commonly used to chart much larger proteins, to determine the precise shape of small organic molecules. The new technique works with vanishingly small samples, is blazing fast, and is surprisingly easy.

Because it does work so smoothly, the new technique could revolutionize fields both inside and outside of research, Bertozzi and others say. Tim Grüne, an electron diffraction expert at the Paul Scherrer Institute in Villigen, Switzerland, who led the European group, notes that pharmaceutical companies build massive collections of crystalline compounds, in which they hunt for potential new drugs. But only about one-quarter to one-third of the compounds form crystals big enough for x-ray crystallography. “This will remove a bottleneck and lead to an explosion of structures,” Grüne says. That could speed the search for promising drug leads in tiny samples of exotic plants and fungi. For crime labs, it could help them quickly identify the latest heroin derivatives hitting the streets. And it could even help Olympics officials clean up sports by making it easier to spot vanishingly small amounts of performance-enhancing drugs. All because structures rule—and are now easier than ever to decipher.


Here’s a great signal of progress in domesticating DNA - not only is there work on developing microorganisms to metabolize current types of plastic, but this initiative is also developing microorganisms that make truly biodegradable plastics from compost.
The company has signed agreements with Ontario firms that plan to use it to make compostable coffee pods and the plastic that's printed out by 3D printers.
PHA is already on the market. Because it's biocompatible and biodegradable, it's used in lots of medical applications ranging from heart valves to dissolving sutures.

Greener coffee pods? Bacteria help turn food waste into compostable plastic

Toronto-based cleantech startup uses 'the fat of the bacteria'
What if plastic were made from waste like banana peels, coffee grounds and cardboard takeout containers instead of petroleum? And what if, after use, that plastic decomposed like the biological materials it was made from?

Toronto-based Genecis, a cleantech startup,  is trying to make that dream of greener plastic a reality, and to make it cheap enough to use in everyday throwaway items like coffee pods and other food packaging.

Genecis harnesses bacteria to turn kitchen waste into compostable, biocompatible plastics called PHAs (polyhydroxyalkanoates).

The plastic-making bacteria eat waste that has been pre-processed by other bacteria into molecular bite-sized pieces. And, like us, if they're well fed, they pack on some extra weight — strangely enough, as plastic.
"It's like the fat of the bacteria," explains Luna Yu, the company's 24-year-old CEO.


This is another fascinating signal regarding our microbial ecology.
"When we mapped the genome of Bacteroides fragilis a few years ago we were astonished to discover a human-like gene not present in any other bacteria. The protein produced from this gene is nearly the same shape as a protein in almost every human cell."

"When we discovered that Bacteroides fragilis produces lots of this mimic protein we were very excited. No other bacteria produced a mimic of human ubiquitin and this one lives in our gut. We immediately wondered if it might be linked with autoimmune diseases such as lupus. It has been known since the 1990s that some people with autoimmune diseases have antibodies that target their own human ubiquitin, but we don't know why this happens. So we decided to see if people also had antibodies that target the Bacteroides fragilis version of ubiquitin."

Ground-breaking discovery finds new link between autoimmune diseases and a gut bacterium

Queen's University Belfast researchers have, for the first time, found a specific microbe in the gut that pumps out protein molecules that mimic a human protein, causing the human defence system to turn on its own cells by mistake.

The culprit in this case is called Bacteroides fragilis, a bacterium that normally lives in the human gut. The Queen's team has shown that this bacterium produces a human-like protein that could trigger autoimmune diseases, such as rheumatoid arthritis. This human protein is called 'ubiquitin' and is needed for all the normal cell processes in our bodies.

The study, recently published in the British Society for Immunology journal Clinical and Experimental Immunology, is a significant discovery. "Mimic proteins" fool our immune defence system into reacting with our own bodies, resulting in autoimmune disease, a condition in which your immune system mistakenly attacks the body.


This is a worthy signal of the emerging power of AI to hover around the Turing test - or some proxies of a Turing test. The images of the created works are worth the view - they are fascinating. We don’t know what art is, but many recognize it when they see it - even if they don’t know the creator.

75% of people think this AI artist is human

One piece recently sold for $16,000 at an auction.
When artificial intelligence has been used to create works of art, a human artist has always exerted a significant element of control over the creative process.

But what if a machine were programmed to create art on its own, with little to no human involvement? What if it were the primary creative force in the process? And if it were to create something novel, engaging, and moving, who should get credit for this work?

At Rutgers’ Art & AI Lab, we created AICAN, a program that could be thought of as a nearly autonomous artist that has learned existing styles and aesthetics and can generate innovative images of its own.

People like AICAN’s work, and can’t distinguish it from that of human artists. Its pieces have been exhibited worldwide, and one even recently sold for $16,000 at an auction.


Now, the practice of law may seem antithetical to the creative practice of art - but on the other hand, creating compelling and novel arguments for or against a case may require similar doses of creative talent. Here again we may be approaching a proxy of the Turing Test.
most of the participants stressed that high-volume and low-risk contracts took up too much of their time, and felt it was incumbent on lawyers to automate work when, and where, possible. For them, the study was also a simple, practical demonstration of a not-so-scary AI future. However, lawyers also stressed that undue weight should not be put on legal AI alone. One participant, Justin Brown, stressed that humans must use new technology alongside their lawyerly instincts. He says: “Either working alone is inferior to the combination of both.”

20 top lawyers were beaten by legal AI. Here are their surprising responses

In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes. The study made waves around the world and was covered across global media.


Is it going to become much harder to lie with our written words? An interesting signal.

Police are using artificial intelligence to spot written lies

There’s no foolproof way to know if someone’s verbally telling lies, but scientists have developed a tool that seems remarkably accurate at judging written falsehoods. Using machine learning and text analysis, they’ve been able to identify false robbery reports with such accuracy that the tool is now being rolled out to police stations across Spain.

Computer scientists from Cardiff University and Charles III University of Madrid developed the tool, called VeriPol, specifically to focus on robbery reports. In their paper, published in the journal Knowledge-Based Systems earlier this year, they describe how they trained a machine-learning model on more than 1000 police robbery reports from Spanish National Police, including those that were known to be false. A pilot study in Murcia and Malaga in June 2017 found that, once VeriPol identified a report as having a high probability of being false, 83% of these cases were closed after the claimants faced further questioning. In total, 64 false reports were detected in one week.

VeriPol works by using algorithms to identify the various features in a statement, including all adjectives, verbs, and punctuation marks, and then picking up on the patterns in false reports. According to a Cardiff University statement, false robbery reports are more likely to be shorter, focused on the stolen property rather than the robbery itself, have few details about the attacker or the robbery, and lack witnesses.

Taken together, these sound like common-sense characteristics that humans could recognize. But the AI proved more effective at unemotionally scanning reports and identifying patterns, at least compared to historical data: Typically, just 12.14 false reports are detected by police in a week in June in Malaga, and 3.33 in Murcia.
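
VeriPol's exact feature set and model live in the paper, not here, so the sketch below is only a minimal stand-in for the general approach the article describes: turn each report into simple text features and train a classifier on reports whose truth or falsity is known. The toy reports, labels, and choice of features are all hypothetical.

# Minimal stand-in for the approach described above, not the actual VeriPol
# model. Requires scikit-learn. The toy reports and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "They took my phone and wallet near the station. I did not see them.",
    "Two men threatened me with a knife on the high street at 22:15; a shop "
    "owner saw everything and called the police.",
]
labels = [1, 0]  # 1 = false report, 0 = truthful report (toy labels)

# Word n-gram counts stand in for the adjective, verb, and punctuation
# features the article mentions; a real system would use a linguistic parser.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reports, labels)

new_report = ["My bag was stolen. It had a laptop in it."]
print(model.predict_proba(new_report))  # [P(truthful), P(false)]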


Jeremy Rifkin has written about the emerging ‘Zero Marginal Cost Society’ - a very important signal of the need to develop a different social-economic-political paradigm to displace the current efforts to maintain artificial scarcity. This is an important signal confirming the emerging geopolitics of the new energy economy. One caveat - the article should be talking about energy in general, not just solar energy.

Is Australia on the verge of having too much solar energy?

Solar will represent a very substantial part of our power supply, but we’re hardly at risk of generating too much. Here’s why
Over the last few weeks there have been a number of reports in the media that Australia is on the verge of having too much solar energy.

This includes claims by some electricity generators that we are heading towards a “solar peak” – a point at which “there is no point in putting any more solar power into the system” because it will just be spilled and wasted.

Some are claiming it might even cause blackouts. Andrew Dillon, head of the Energy Networks Association, told the ABC’s 7.30 Report solar was likely to cause, “voltage disturbances in the system which will lead to transformers and other equipment tripping off to protect themselves from being damaged and that will cause localised blackouts.”

This is all occurring within a furious battle over a recommendation by the Australian Competition and Consumer Commission that by 2021 the federal government should remove the rebate provided to solar systems under the small-scale renewable energy scheme (SRES). So are we faced with a serious problem of there being too much solar which means we should scrap the rebate?


This is an interesting signal - a warning to watch as more devices become integrated with the Internet-of-Things.
Apple acknowledged in December that it had intentionally slowed iPhones with degraded batteries through software updates to avoid sudden shutdown problems, but denied it had ever done anything to intentionally shorten the life of a product.
The company later apologised for its actions and reduced the cost of battery replacements. It also added battery health information to iOS and allowed users to turn off the slowing down of the iPhone’s processor.

Apple and Samsung fined for deliberately slowing down phones

Italian investigation found software updates ‘significantly reduced performance’, hastening new purchases
Apple and Samsung are being fined €10m and €5m respectively in Italy for the “planned obsolescence” of their smartphones.

An investigation launched in January by the nation’s competition authority found that certain smartphone software updates had a negative effect on the performance of the devices.

Believed to be the first ruling of its kind against smartphone manufacturers, the investigation followed accusations operating system updates for older phones slowed them down, thereby encouraging the purchase of new phones.
In a statement the antitrust watchdog said “Apple and Samsung implemented dishonest commercial practices” and that operating system updates “caused serious malfunctions and significantly reduced performance, thus accelerating phones’ substitution”.

It added the two firms had not provided clients adequate information about the impact of the new software “or any means of restoring the original functionality of the products”.


Halloween Pumpkins
This year I only did five pumpkins for Halloween - pictures (which don't do them justice by a great distance) are here