Thursday, January 18, 2018

Friday Thinking 19 Jan. 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content
Quotes:

Articles:




... Deep Learning is in fact the stepping stone tool that other cognitive tools will leverage to achieve higher levels of cognition. We’ve already seen this in DeepMind’s AlphaZero playing systems where conventional tree search is used in conjunction with Deep Learning. Deep Learning is the wheel of cognition. Just as the wheel enabled more effective transportation, so will Deep Learning achieve effective artificial intelligence.
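
To make the 'tree search in conjunction with Deep Learning' point concrete, here is a minimal sketch - not DeepMind's algorithm, just an illustration - of a depth-limited tree search that defers to a stand-in 'value network' at its frontier. The toy game (a Nim variant) and the value_net heuristic are invented for the example.

```python
# Minimal sketch (not DeepMind's code): a depth-limited negamax search over a toy
# Nim variant (take 1-3 stones; taking the last stone wins). At the depth limit the
# search defers to value_net(), a hypothetical stand-in for a trained network.

def value_net(stones: int) -> float:
    """Placeholder 'learned' evaluation: estimated value in [-1, 1] for the player to move."""
    return 1.0 if stones % 4 != 0 else -1.0

def legal_moves(stones: int):
    return [take for take in (1, 2, 3) if take <= stones]

def search(stones: int, depth: int) -> float:
    """Negamax tree search; beyond the depth budget, trust the stand-in value net."""
    if stones == 0:
        return -1.0  # no stones left: the previous player took the last one and won
    if depth == 0:
        return value_net(stones)
    return max(-search(stones - take, depth - 1) for take in legal_moves(stones))

def best_move(stones: int, depth: int = 4) -> int:
    return max(legal_moves(stones), key=lambda take: -search(stones - take, depth - 1))

print(best_move(10))  # prints 2: taking 2 leaves a multiple of 4 for the opponent
```

The division of labour is the point: exact search near the root, a learned evaluation wherever the search budget runs out.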

Politics in science has always been present and it's not going to disappear any time soon. We are familiar with the feud between Nikola Tesla and Thomas Edison. Edison died a wealthy man, in stark contrast to Tesla, who died penniless. Yet the scientific contributions of Tesla arguably surpass Edison's. Even so, Edison is famous today, and Tesla is likely only well known because an electric car company is named after him.

The Boogeyman Argument that Deep Learning will be Stopped by a Wall




In 2015, when Lazarus Liu moved home to China after studying logistics in the United Kingdom for three years, he quickly noticed that something had changed: Everyone paid for everything with their phones. At McDonald’s, the convenience store, even at mom-and-pop restaurants, his friends in Shanghai used mobile payments. Cash, Liu could see, had been largely replaced by two smartphone apps: Alipay and WeChat Pay. One day, at a vegetable market, he watched a woman his mother’s age pull out her phone to pay for her groceries. He decided to sign up.

To get an Alipay ID, Liu had to enter his cell phone number and scan his national ID card. He did so reflexively. Alipay had built a reputation for reliability, and compared to going to a bank managed with slothlike indifference and zero attention to customer service, signing up for Alipay was almost fun. With just a few clicks he was in. Alipay’s slogan summed up the experience: “Trust makes it simple.”

Alipay turned out to be so convenient that Liu began using it multiple times a day, starting first thing in the morning, when he ordered breakfast through a food delivery app. He realized that he could pay for parking through Alipay’s My Car feature, so he added his driver’s license and license plate numbers, as well as the engine number of his Audi. He started making his car insurance payments with the app. He booked doctors’ appointments there, skipping the chaotic lines for which Chinese hospitals are famous. He added friends in Alipay’s built-in social network. When Liu went on vacation with his fiancée (now his wife) to Thailand, they paid at restaurants and bought trinkets with Alipay. He stored whatever money was left over, which wasn’t much once the vacation and car were paid for, in an Alipay money market account. He could have paid his electricity, gas, and internet bills in Alipay’s City Service section. Like many young Chinese who had become enamored of the mobile payment services offered by Alipay and WeChat, Liu stopped bringing his wallet when he left the house.

INSIDE CHINA'S VAST NEW EXPERIMENT IN SOCIAL RANKING




real cities are their own reasons for existing. If it only exists to serve a function in the broader world, it's a town, not a city. What real cities do -- whether finance, or tech, or energy, or governance -- is not who they are. The history of any major city illustrates this. San Francisco was about the gold rush before it was about tech. Seattle was about fur, fish, and lumber before it was about Boeing or Amazon. New York was about textiles before it was about high finance. And all of them are always, first and foremost, about themselves. About their unique psychological identities.

The current global-macroeconomy "job" of a city has only a weak correlation with its essential nature.

In fact, it is cities, not nations, that best fit the formula "Make X Great Again." Great cities are longer-lived than nations and empires. Often they are effectively immortal. When nations experience multiple chapters of greatness, it is usually traceable to chapters of greatness in one or more of their great, immortal cities. Long-lived cities are make-ourselves-great-again engines. They keep finding new ways of continuing the game of being themselves rather than trying to win a particular economic era (i.e., for you Carseans out there, great cities are infinite-game players).

And greatness is never about the jobs being done at any given time, whether you're talking individuals, cities, or nations.

Why Cities Fail






This is a clear signal of the emerging change in energy geopolitics - a longish article - but worth it for anyone interested in the change in conditions of change.
“What I like about the job is that it is about much more than energy,” says Šefčovič. “It came from [European Commission president] Juncker’s idea to have a much more horizontal, cross-cutting approach in energy policy, which means I am working with a team of fourteen commissioners in my Energy Union portfolio. If I have to sum up what the job is about: it is making sure that we are energizing and modernizing the European economy.”

Interview with Maroš Šefčovič: Energy Union is “Deepest Transformation of Energy Systems Since Industrial Revolution”

Before the next European elections in 2019, Maroš Šefčovič, the European Commission’s Vice-President for the Energy Union, wants to have a new legal framework in place which will “bring in the most comprehensive and deepest transformation of energy systems in Europe, since the [industrial revolution] one hundred and fifty years ago.” In an exclusive interview with Energy Post, he says that the success of the Energy Union project “will decide the place of Europe on the geopolitical and economic map of the 21st Century”. Renewables, decentralized energy, digitalization and smart grids will be the “backbone of the new modern economy in Europe.” On the controversy over Nord Stream 2, Gazprom’s pipeline project, Šefčovič says he wants to resolve the issue through “negotiations”.


This is a very long read - but also worth it for anyone interested in a seasoned, intelligent foresight analysis by Rodney Brooks.

MY DATED PREDICTIONS

With all new technologies there are predictions of how good it will be for humankind, or how bad it will be. A common thread that I have observed is how people tend to underestimate how long new technologies will take to be adopted after proof of concept demonstrations. I pointed to this as the seventh of seven deadly sins of predicting the future of AI.

For example, recently the early techno-utopianism of the Internet providing a voice to everyone and thus blocking the ability of individuals to be controlled by governments has turned to depression about how it just did not work out that way. And there has been discussion of how the good future we thought we were promised is taking much longer to be deployed than we had ever imagined. This is precisely a realization that the early optimism about how things would be deployed and used just did not turn out to be true.

Over the last few months I have been throwing a little cold water over what I consider to be current hype around Artificial Intelligence (AI) and Machine Learning (ML). However, I do not think that I am a techno-pessimist. Rather, I think of myself as a techno-realist…


This is an interesting analysis pointing to the 'unfixable' nature of broken platforms - a consequence of the fundamental nature of their business models. The focus is on FB, but it is much the same for all platforms - except Wikimedia. Imagine if Facebook had chosen to become a foundation the way Wikimedia has?
We love to think our corporate heroes are somehow super human, capable of understanding what’s otherwise incomprehensible to mere mortals like the rest of us. But Facebook is simply too large an ecosystem for one person to fix. And anyway, his hands are tied from doing so. So instead, he’s doing what people (especially engineers) always do when the problem is so existential they can’t wrap their minds around it: He’s redefining the problem and breaking it into constituent parts.

Facebook Can’t Be Fixed.

Facebook’s core problem is not foreign interference, spam bots, trolls, or fame mongers. It’s the company’s core business model, and abandoning it is not an option.
In his short but impactful post, Zuckerberg notes that when he started doing personal challenges in 2009, Facebook did not have “a sustainable business model,” so his first pledge was to wear a tie all year, so as to focus himself on finding that model.

He sure as hell did find that model: data-driven audience-based advertising, but more on that in a minute. In his post, Zuckerberg notes that 2018 feels “a lot like that first year,” adding “Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent….My personal challenge for 2018 is to focus on fixing these important issues.”

The post is worthy of a doctoral dissertation. I’ve read it over and over, and would love, at some point, to break it down paragraph by paragraph. Maybe I’ll get to that someday, but first I want to emphatically state something it seems no one else is saying (at least not in mainstream press coverage of the post):
You cannot fix Facebook without completely gutting its advertising-driven business model.


This is interesting - it could be a weak signal for the emergence of new institutions - like Auditor Generals of Algorithms.
I don't think it is possible to guarantee prevention of harm in our algorithms - for the same reason that one can't prestate the phase space of the future evolution of the biosphere and thus can't predict the future of evolution.
But what we can do - and have always done - is build robust, response-able, timely recourse systems to address and correct.
“The topics that we’re concerned with are how do you scrutinise an algorithm; how do you hold an algorithm accountable when it’s making very important decisions that actually affect the experiences and life outcomes of people,” Suleyman says. “We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.”

DeepMind's new AI ethics unit is the company's next big move

Google-owned DeepMind has announced the formation of a major new AI research unit comprised of full-time staff and external advisors
As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial. For DeepMind, whose stated mission is to “solve intelligence”, that task will be the work of a new initiative tackling one of the most fundamental challenges of the digital age: technology is not neutral.
DeepMind Ethics & Society (DMES), a unit comprised of both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impacts of the technologies it creates. In development for the past 18 months, the unit is currently made up of around eight DeepMind staffers and six external, unpaid fellows. The full-time team within DeepMind will swell to around 25 people within the next 12 months.

Headed by technology consultant Sean Legassick and former Google UK and EU policy manager and government adviser Verity Harding, DMES will work alongside technologists within DeepMind and fund external research based on six areas: privacy, transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges. Within those broad themes, some of the specific areas addressed will be algorithmic bias, the future of work and lethal autonomous weapons. Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.

For DeepMind co-founder Mustafa Suleyman, it’s a significant moment. “We’re going to be putting together a very meaningful team, we’re going to be funding a lot of independent research,” he says when we meet at the firm’s London headquarters. Suleyman is bullish about his company’s efforts to not just break new frontiers in artificial intelligence technology, but also keep a grip on the ethical implications. “We’re going to be collaborating with all kinds of think tanks and academics. I think it’s exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”

To explain where the idea for DMES came from, Suleyman looks back to before the founding of DeepMind in 2010. “My background before that was pretty much seven or eight years as an activist,” he says. An Oxford University drop-out at the age of 19, Suleyman went on to found a telephone counselling service for young Muslims before working as an advisor to then Mayor of London Ken Livingstone, followed by spells at the UN, the Dutch government and WWF. He explains his ambition thusly, “How do you get people who speak very different social languages to put purpose ahead of profit in the heart of their organisations and coordinate effectively?”


The use of big data and AI is enabling new forms of diagnostic tools - this one is cheaper, easier and better.
“To see in many dimensions at the same time is very difficult. With machine learning, it becomes easy”

Colonoscopy? How About a Blood Test?

Israel’s Medial offers less-invasive, more data-driven assessments to resistant patients.
An Israeli health-tech company is trying to use machine learning software to do just that. ColonFlag is the first product from Medial EarlySign, and while poorly named, the software predicts colon cancer twice as well as the fecal exam that’s the industry-standard colonoscopy alternative, according to a 2016 study published in the Journal of the American Medical Informatics Association. ColonFlag compares new blood tests against a patient’s previous diagnostics, as well as Medial’s proprietary database of 20 million anonymized tests spanning three decades and three continents, to evaluate the patient’s likelihood of harboring cancer. Israel’s second-largest health maintenance organization is already using the software, and Medial (a mashup of “medical” and “algorithms”) is working with Kaiser Permanente and two leading U.S. hospitals to develop other uses for its database and analysis tools.

“Our algorithms can automatically scan all the patient parameters and detect subtle changes over time to find correlative patterns for outcomes we want to predict,” Nir Kalkstein, Medial’s co-founder and chief technology officer, says, characteristically clinical. The database allows his team “to find similar events in the past and then identify from the data correlations that can predict these events.”
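
Medial's models are proprietary, so the sketch below is only a rough illustration of the general pattern the quote describes - turn a patient's serial blood tests into trend features and feed them to a classifier that scores risk. The analyte names, cohort and data are all synthetic inventions for the example.

```python
# Rough illustration only (Medial's ColonFlag models are proprietary): derive trend
# features from a patient's serial blood tests and train a classifier on them.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def trend_features(history: np.ndarray) -> np.ndarray:
    """history: (n_visits, n_analytes) blood values over time -> latest, change, slope."""
    latest = history[-1]
    change = history[-1] - history[0]
    slope = np.polyfit(np.arange(len(history)), history, 1)[0]  # per-analyte slope
    return np.concatenate([latest, change, slope])

# Synthetic stand-in cohort: 500 patients, 5 visits, 3 analytes (say haemoglobin,
# MCV, platelets); "cases" get a gradual haemoglobin decline over their visits.
X, y = [], []
for i in range(500):
    is_case = i % 5 == 0
    history = rng.normal([14.0, 90.0, 250.0], [1.0, 5.0, 40.0], size=(5, 3))
    if is_case:
        history[:, 0] -= np.linspace(0.0, 2.0, 5)
    X.append(trend_features(history))
    y.append(int(is_case))

model = GradientBoostingClassifier().fit(np.array(X), np.array(y))
new_patient = trend_features(rng.normal([13.0, 88.0, 240.0], [0.5, 2.0, 20.0], (5, 3)))
print("estimated risk score:", model.predict_proba(new_patient.reshape(1, -1))[0, 1])
```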

Other companies are building massive databases with an eye toward predictive medicine, including heavy hitters such as DeepMind Technologies, owned by Google parent Alphabet Inc. In Boulder, Colo., startup SomaLogic Inc. is predicting heart attacks based on combinations of certain proteins in cardiac patients. In Salt Lake City, Myriad Genetics Inc. assesses hereditary cancer risks based on DNA profiles.


Moore’s Law is Dead - Long Live Moore’s Law. This is a strong signal of the emerging computational paradigms that include memristor, quantum, and DNA computing, just to name a few. There’s a 1 min video.

Intel’s New Self-Learning Chip Promises to Accelerate Artificial Intelligence

Intel Introduces First-of-Its-Kind Self-Learning Chip Codenamed Loihi
Imagine a future where complex decisions could be made faster and adapt over time. Where societal and industrial problems can be autonomously solved using learned experiences.

It’s a future where first responders using image-recognition applications can analyze streetlight camera images and quickly solve missing or abducted person reports.
It’s a future where stoplights automatically adjust their timing to sync with the flow of traffic, reducing gridlock and optimizing starts and stops.

It’s a future where robots are more autonomous and performance efficiency is dramatically increased.

An increasing need for collection, analysis and decision-making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. To keep pace with the evolution of technology and to drive computing beyond PCs and servers, Intel has been working for the past six years on specialized architectures that can accelerate classic compute platforms. Intel has also recently advanced investments and R&D in artificial intelligence (AI) and neuromorphic computing.
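
The announcement doesn't describe Loihi's internals, but the 'neuromorphic' style of computation it refers to is usually built from spiking neurons. As a generic, purely illustrative example (arbitrary parameters, not Intel's design), a minimal leaky integrate-and-fire neuron looks like this:

```python
# Generic illustration of spiking ("neuromorphic") computation, not Intel's design:
# a leaky integrate-and-fire neuron. The membrane potential leaks toward rest,
# integrates incoming current, and emits a discrete spike (then resets) when it
# crosses a threshold. All parameters here are arbitrary.
def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    potential, spikes = 0.0, []
    for i_t in input_current:
        potential = leak * potential + i_t   # leak, then integrate this step's input
        if potential >= threshold:           # fire and reset
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 1, 0, 0, 1]
```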


This is a fascinating confirmation of some aspects of the biological foundations of our moral and social fabric.
"The findings give us a glimpse into what is the nature of morality," said Dr. Marco Iacoboni, director of the Neuromodulation Lab at UCLA's Ahmanson-Lovelace Brain Mapping Center and the study's senior author. "This is a foundational question to understand ourselves, and to understand how the brain shapes our own nature."

Mirror neuron activity predicts people's decision-making in moral dilemmas, study finds

It is wartime. You and your fellow refugees are hiding from enemy soldiers, when a baby begins to cry. You cover her mouth to block the sound. If you remove your hand, her crying will draw the attention of the soldiers, who will kill everyone. If you smother the child, you'll save yourself and the others.

If you were in that situation, which was dramatized in the final episode of the '70s and '80s TV series "M.A.S.H.," what would you do?

The results of a new UCLA study suggest that scientists could make a good guess based on how the brain responds when people watch someone else experience pain. The study found that those responses predict whether people will be inclined to avoid causing harm to others when facing moral dilemmas.

Iacoboni and his colleagues hypothesized that people who had greater neural resonance than the other participants while watching the hand-piercing video would also be less likely to choose to silence the baby in the hypothetical dilemma, and that proved to be true. Indeed, people with stronger activity in the inferior frontal cortex, a part of the brain essential for empathy and imitation, were less willing to cause direct harm, such as silencing the baby.

But the researchers found no correlation between people's brain activity and their willingness to hypothetically harm one person in the interest of the greater good—such as silencing the baby to save more lives. Those decisions are thought to stem from more cognitive, deliberative processes.

The study confirms that genuine concern for others' pain plays a causal role in moral dilemma judgments, Iacoboni said. In other words, a person's refusal to silence the baby is due to concern for the baby, not just the person's own discomfort in taking that action.


The Human Genome Project began in 1990 as an intended 15-year effort. Halfway through, the project had sequenced only 1% of the human genome. The power of exponential technology progress enabled the project to be completed in 2003 - two years ahead of schedule.
The project to map the ‘Connectome’ is facing similar challenges and trajectories. The challenge of shifting to a team approach is increased by continued reliance on traditional siloed disciplines.
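
The genome-project arithmetic above is the classic exponential-doubling story. A quick back-of-envelope check, assuming sequencing capacity roughly doubled each year once the 1% mark was reached around year eight:

```python
# Back-of-envelope: ~1% of the genome done around year 8, then capacity roughly
# doubling each year (the hedged assumption behind the exponential argument).
pct, year = 1.0, 8
while pct < 100.0:
    pct *= 2          # one more year of doubling
    year += 1
print(year)           # 15 - only ~7 doublings are needed; faster-than-annual doubling
                      # is what brought the real project in two years early (2003)
```
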
In July 2016, an international team published a map of the human brain's wrinkled outer layer, the cerebral cortex. Many scientists consider the result to be the most detailed human brain-connectivity map so far. Yet, even at its highest spatial resolution (1 cubic millimetre), each voxel — the smallest distinguishable element of a 3D object — contains tens of thousands of neurons. That's a far cry from the neural connections that have been mapped at single-cell resolution in the fruit fly.

“In case you thought brain anatomy is a solved problem, take it from us — it isn't,” says Van Wedeen, a neuroscientist at Massachusetts General Hospital in Charlestown and a principal investigator for the Human Connectome Project (HCP), a US-government-funded global consortium that published the brain map.

Neuroscience: Big brain, big data

Neuroscientists are starting to share and integrate data — but shifting to a team approach isn't easy.
As big brain-mapping initiatives go, Taiwan's might seem small. Scientists there are studying the humble fruit fly, reverse-engineering its brain from images of single neurons. Their efforts have produced 3D maps of brain circuitry in stunning detail.

Researchers need only a computer mouse and web browser to home in on individual cells and zoom back out to intertwined networks of nerve bundles. The wiring diagrams look like colourful threads on a tapestry, and they're clear enough to show which cell clusters control specific behaviours. By stimulating a specific neural circuit, researchers can cue a fly to flap its left wing or swing its head from side to side — feats that roused a late-afternoon crowd in November at the annual meeting of the Society for Neuroscience in San Diego, California.

But even for such a small creature, it has taken the team a full decade to image 60,000 neurons, at a rate of 1 gigabyte per cell, says project leader Ann-Shyn Chiang, a neuroscientist at the National Tsing Hua University in Hsinchu City, Taiwan — and that's not even half of the nerve cells in the Drosophila brain. Using the same protocol to image the 86 billion neurons in the human brain would take an estimated 17 million years, Chiang reported at the meeting.
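
Chiang's scaling estimate is easy to sanity-check - a decade for 60,000 neurons at roughly 1 GB per cell, extrapolated to the ~86 billion neurons of a human brain (my own arithmetic; it lands in the same ballpark as the quoted 17 million years):

```python
# Sanity check of the scaling claim: a decade for 60,000 fruit-fly neurons
# at ~1 GB per cell, extrapolated to the ~86 billion neurons of a human brain.
neurons_imaged, years_taken = 60_000, 10
human_neurons = 86_000_000_000

years_needed = human_neurons / (neurons_imaged / years_taken)
raw_data_gb = human_neurons * 1.0                 # ~1 GB per neuron

print(f"{years_needed / 1e6:.0f} million years")  # ~14 million - same ballpark as the quoted 17
print(f"{raw_data_gb / 1e9:.0f} exabytes")        # ~86 exabytes of raw image data
```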

But brain mapping and DNA sequencing are different beasts. A single neuroimaging data set can measure in the terabytes — two to three orders of magnitude larger than a complete mammalian genome. Whereas geneticists know when they've finished decoding a stretch of DNA, brain mappers lack clear stopping points and wrestle with much richer sets of imaging and electrophysiological data — all the while wrangling over the best ways to collect, share and interpret them. As scientists develop tools to share and analyse ever-expanding neuroscience data sets, however, they are coming to a shared realization: cracking the brain requires a concerted effort.


Another signal in the emerging transformation of energy geopolitics.

Britain Now Generates Twice as Much Electricity from Wind as Coal

Just six years ago, more than 40% of Britain’s electricity was generated by burning coal. Today, that figure is just 7%. Yet if the story of 2016 was the dramatic demise of coal and its replacement by natural gas, then 2017 was most definitely about the growth of wind power.

Wind provided 15% of electricity in Britain last year (Northern Ireland shares an electricity system with the Republic and is calculated separately), up from 10% in 2016. This increase, a result of both more wind farms coming online and a windier year, helped further reduce coal use and also put a stop to the rise in natural gas generation.
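
A quick check of the headline claim against the shares quoted above:

```python
# Generation shares quoted above: coal 7%, wind 15%.
coal_share, wind_share = 0.07, 0.15
print(round(wind_share / coal_share, 1))   # 2.1 - wind's share is indeed more than twice coal's
```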


This is a good signal in the progress of domesticating DNA and in fighting climate change.
"This could be an important breakthrough in biotechnology. It should be possible to optimise the system still further and finally develop a `microbial cell factory' that could be used to mop up carbon dioxide from many different types of industry.
"Not all bacteria are bad. Some might even save the planet."

A biological solution to carbon capture and recycling?

E. coli bacteria shown to be excellent at CO2 conversion
Scientists at the University of Dundee have discovered that E. coli bacteria could hold the key to an efficient method of capturing and storing or recycling carbon dioxide.

Professor Frank Sargent and colleagues at the University of Dundee's School of Life Sciences, working with local industry partners Sasol UK and Ingenza Ltd, have developed a process that enables the E. coli bacterium to act as a very efficient carbon capture device.

"For example, the E. coli bacterium can grow in the complete absence of oxygen. When it does this it makes a special metal-containing enzyme, called 'FHL', which can interconvert gaseous carbon dioxide with liquid formic acid. This could provide an opportunity to capture carbon dioxide into a manageable product that is easily stored, controlled or even used to make other things. The trouble is, the normal conversion process is slow and sometime unreliable.

"What we have done is develop a process that enables the E. coli bacterium to operate as a very efficient biological carbon capture device. When the bacteria containing the FHL enzyme are placed under pressurised carbon dioxide and hydrogen gas mixtures -- up to 10 atmospheres of pressure -- then 100 per cent conversion of the carbon dioxide to formic acid is observed. The reaction happens quickly, over a few hours, and at ambient temperatures.


The link between our microbial profile, our diet and our health conditions continues to unfold. This article is worth reading.

Treating Disease by Nudging the Microbes Inside Us

We’ve spent centuries trying to kill bacteria. Now, scientists have shown that subtler approaches can work—at least in mice.
the links between microbes and poor health can be more complicated. Our bodies are naturally home to tens of trillions of bacteria. Most are benign, or even beneficial. But often, these so-called microbiomes can shift into a negative state. For example, inflamed guts tend to house an unusually large number of bacteria from the Enterobacteriaceae family (pronounced En-ter-oh-back-tee-ree-ay-see-ay, and hereafter just “enteros”). There’s no villain in this scenario, no single antagonist as there would be in the case of tuberculosis or cholera. The enteros are part of a normal gut; it’s the same old community, just altered.

These kinds of shifts are harder to rectify. For a start, it’s often unclear if the enteros cause the inflammation, if the inflammation changes the microbes, or both. Even if the microbes are responsible, how do you fix that? Dietary changes are typically too imprecise. Antibiotics are too crude, killing off beneficial microbes while suppressing the problematic ones.

But Sebastian Winter, from the University of Texas Southwestern Medical Center, has an alternative. His team showed that the blooming enteros rely on enzymes that, in turn, depend on the metal molybdenum. A related metal—tungsten—can take the place of molybdenum, and stop those enzymes from working properly.

By feeding mice small amounts of tungsten salts, Winter’s team managed to specifically prevent the growth of enteros, while leaving other microbes unaffected. Best of all, the tungsten treatment spared the enteros under normal conditions, suppressing them only in the context of an inflamed gut. It’s a far more precise and subtle way of changing the microbiome than, say, blasting it with antibiotics. It involves gentle nudges rather than killing blows.


New ‘wonders of the world’ in the making. The 2.5 min video explains it all.

Casting a $20 Million Mirror for the World’s Largest Telescope

The glass arcs that will let astronomers peer back millions of years are decades in the making
Building a mirror for any giant telescope is no simple feat. The sheer size of the glass, the nanometer precision of its curves, its carefully calculated optics, and the adaptive software required to run it make this a task of herculean proportions. But the recent castings of the 15-metric ton, off-axis mirrors for the Giant Magellan Telescope (GMT) forced engineers to push the design and manufacturing process beyond all previous limits.

Building the GMT is not a task of years, but of decades. The Giant Magellan Telescope Organization (GMTO) and a team at the University of Arizona's Richard F. Caris Mirror Laboratory cast the first of seven mirrors back in 2005; they expect to complete construction of the telescope in 2025. Once complete, it’s expected to be the largest telescope in the world. The seven 8.4-meter-wide mirrors will combine to serve as a 24.5-meter mirror telescope with 10 times the resolution of the Hubble Space Telescope. This will allow astronomers to gaze back in time to, they hope, the materialization of galaxies.
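
The '10 times the resolution of Hubble' figure checks out against the apertures - diffraction-limited angular resolution scales inversely with mirror diameter, and Hubble's primary is 2.4 m (a well-known figure, not stated in the article):

```python
# Checking the "10 times the resolution of Hubble" claim: angular resolution ~ 1/aperture.
gmt_aperture_m, hubble_aperture_m = 24.5, 2.4
print(round(gmt_aperture_m / hubble_aperture_m, 1))   # 10.2 - consistent with the claim
```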


Optical and Olfactory Illusions? This is another fun challenge to the notion of our capacity to perceive an objective world.
"The Skittles people, being much smarter than most of us, recognized that it is cheaper to make things smell and look different than it is to make them actually taste different."
Katz continues: "So, Skittles have different fragrances and different colors — but they all taste exactly the same."

Are Gummy Bear Flavors Just Fooling Our Brains?

Don Katz is a Brandeis University neuropsychologist who specializes in taste. "I have a colleague in the U.K., Charles Spence, who did the most wonderful experiment," Katz says. "He took normal college students and gave them a row of clear beverages in clear glass bottles. The beverages had fruit flavorings. One was orange, one was grape, apple, lemon."

Spence, who is also the author of the 2017 book Gastrophysics: The New Science of Eating, says he has "always been interested in how the senses affect one another" and conducted the experiment because "there is perhaps nothing more multisensory than flavor perception."

According to Katz, the college students did a great job of differentiating between the flavors of the clear liquid.
"But then he added food coloring," Katz says. "The 'wrong' food coloring for the liquid."

So, the grape-flavored liquid was then colored orange, for example.
"While I wouldn't say they went to chance, their ability to tell which was which got really subpar all of the sudden," Katz says. "The orange beverage tasted orange [to them]. The yellow beverage tasted like lemonade. There wasn't a thing they could do about it."

It was so powerful that even when Spence told the students that it was his job as a scientist to mess with the conditions and asked them to just tell him what they tasted without considering the color, they still couldn't do it.


And here’s some good news for carnivores and bacon-lovers.

U.K. firm touts bacon breakthrough

Recipe uses fruit, spice extracts in place of cancer-linked nitrites
There’s good news for people whose New Year’s resolution is to eat more bacon. This week, the British sausage maker Finnebrogue will introduce bacon rashers made without added nitrites, which have been linked to cancer.

The new English breakfast component will be available only to shoppers in the U.K., who consume, on average, more than 3 kg of bacon per year. Finnebrogue says its Naked Bacon is made with natural fruit and spice extracts.

The firm spent 10 years and $19 million to develop the recipe with the Spanish food ingredients firm Prosur. It claims that the product has beaten the competition in taste tests.