
Friday Thinking 4 May 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase transition - and that tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content
Quotes:

Articles:



Stanislaw Lem thus concludes that if our technological civilization is to avoid falling into decay, human obsolescence in one form or another is unavoidable. The sole remaining option for continued progress would then be the “automatization of cognitive processes” through development of algorithmic “information farms” and superhuman artificial intelligences. This would occur via a sophisticated plagiarism, the virtual simulation of the mindless, brute-force natural selection we see acting in biological evolution, which, Lem dryly notes, is the only technique known in the universe to construct philosophers, rather than mere philosophies.

The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us. This is the basis for Lem’s explorations of The Singularity, and in describing its consequences he reaches many conclusions that most of its present-day acolytes would share. But there is a difference between the typical modern approach and Lem’s, not in degree, but in kind.

Unlike the commodified futurism now so common in the bubble-worlds of Silicon Valley billionaires, Lem’s forecasts weren’t really about seeking personal enrichment from market fluctuations, shiny new gadgets, or simplistic ideologies of “disruptive innovation.” In Summa Technologiae and much of his subsequent work, Lem instead sought to map out the plausible answers to questions that today are too often passed over in silence, perhaps because they fail to neatly fit into any TED Talk or startup business plan: Does technology control humanity, or does humanity control technology? Where are the absolute limits for our knowledge and our achievement, and will these boundaries be formed by the fundamental laws of nature or by the inherent limitations of our psyche? If given the ability to satisfy nearly any material desire, what is it that we actually would want?

To Lem (and, to their credit, a sizeable number of modern thinkers), the Singularity is less an opportunity than a question mark, a multidimensional crucible in which humanity’s future will be forged.

“I feel that you are entering an age of metamorphosis; that you will decide to cast aside your entire history, your entire heritage and all that remains of natural humanity—whose image, magnified into beautiful tragedy, is the focus of the mirrors of your beliefs; that you will advance (for there is no other way), and in this, which for you is now only a leap into the abyss, you will find a challenge, if not a beauty; and that you will proceed in your own way after all, since in casting off man, man will save himself.”

The Book No One Read



Urbanisation might be the most profound change to human society in a century, more telling than colour, class or continent
At some unknown moment between 2010 and 2015, for the first time in human history, more than half the world’s population lived in cities. Urbanisation is unlikely to reverse. Every week since, another 3 million country dwellers have become urbanites. Rarely in history has a small number of metropolises bundled as much economic, political and cultural power over such vast swathes of hinterlands. In some respects, these global metropolises and their residents resemble one another more than they do their fellow nationals in small towns and rural areas. Whatever is new in our global age is likely to be found in cities.

For centuries, philosophers and sociologists, from Jean-Jacques Rousseau to Georg Simmel, have alerted us to how profoundly cities have formed our societies, minds and sensibilities. The widening political polarisation between big cities and rural areas, in the United States as well as Europe, has driven home the point of quite how much the relationship between cities and the provinces, the metropolis and the country, shapes the political lives of societies. The history of cities is an extraordinary guide to understanding today’s world. Yet, compared with historians at large, as well as more present-minded scholars of urban studies, urban historians have not featured prominently in public conversation as of late.

A metropolitan world




Late on the night of October 4, 1957, Communist Party Secretary Nikita Khrushchev was at a reception at the Mariinsky Palace, in Kiev, Ukraine, when an aide called him to the telephone. The Soviet leader was gone a few minutes. When he reappeared at the reception, his son Sergei later recalled, Khrushchev’s face shone with triumph. “I can tell you some very pleasant and important news,” he told the assembled bureaucrats. “A little while ago, an artificial satellite of the Earth was launched.” From its remote Kazakh launchpad, Sputnik 1 had lifted into the night sky, blasting the Soviet Union into a decisive lead in the Cold War space race.
News of the launch spread quickly. In the US, awestruck citizens wandered out into their backyards to catch a glimpse of the mysterious orb soaring high above them in the cosmos. Soon the public mood shifted to anger – then fear. Not since Pearl Harbour had their mighty nation experienced defeat. If the Soviets could win the space race, what might they do next?

Keen to avert a crisis, President Eisenhower downplayed Sputnik’s significance. But, behind the scenes, he leapt into action. By mid-1958, Eisenhower had announced the launch of the National Aeronautics and Space Administration (better known today as Nasa), along with the National Defense Education Act to improve science and technology education in US schools. Eisenhower recognised that the battle for the future no longer depended on territorial dominance. Instead, victory would be achieved by pushing at the frontiers of the human mind.

Sixty years later, Chinese President Xi Jinping experienced his own Sputnik moment. This time it wasn’t caused by a rocket lifting off into the stratosphere, but by a game of Go – won by an AI. For Xi, the defeat of the Korean Go champion Lee Sedol by DeepMind’s AlphaGo made it clear that artificial intelligence would define the 21st century as the space race had defined the 20th.

The event carried an extra symbolism for the Chinese leader. Go, an ancient Chinese game, had been mastered by an AI belonging to an Anglo-American company. As a recent Oxford University report confirmed, despite China’s many technological advances, in this new cyberspace race, the West had the lead.

China’s children are its secret weapon in the global AI arms race




The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

Against metrics: how measuring performance by numbers backfires




The power and potential of computation to tackle important problems has never been greater. In the last few years, the cost of computation has continued to plummet. The Pentium IIs we used in the first year of Google performed about 100 million floating point operations per second. The GPUs we use today perform about 20 trillion such operations — a factor of about 200,000 difference — and our very own TPUs are now capable of 180 trillion (180,000,000,000,000) floating point operations per second.

Even these startling gains may look small if the promise of quantum computing comes to fruition. For a specialized class of problems, quantum computers can solve them exponentially faster. For instance, if we are successful with our 72 qubit prototype, it would take millions of conventional computers to be able to emulate it. A 333 qubit error-corrected quantum computer would live up to our name, offering a 10^100x (a googol) speedup.

There are several factors at play in this boom of computing. First, of course, is the steady hum of Moore’s Law, although some of the traditional measures such as transistor counts, density, and clock frequencies have slowed. The second factor is greater demand, stemming from advanced graphics in gaming and, surprisingly, from the GPU-friendly proof-of-work algorithms found in some of today’s leading cryptocurrencies, such as Ethereum. However, the third and most important factor is the profound revolution in machine learning that has been building over the past decade. It is both made possible by these increasingly powerful processors and is also the major impetus for developing them further.

The new spring in artificial intelligence is the most significant development in computing in my lifetime. When we started the company, neural networks were a forgotten footnote in computer science, a remnant of the AI winter of the 1980s. Yet today, this broad brush of technology has found an astounding number of applications.

Every month, there are stunning new applications and transformative new techniques. In this sense, we are truly in a technology renaissance, an exciting time where we can see applications across nearly every segment of modern society.

However, such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?

Sergey Brin - Alphabet - 2017 Founders’ Letter
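
A quick back-of-envelope check of the ratios Brin cites - a sketch of my own in Python, taking the round figures in the letter at face value - shows where the ~200,000x factor comes from and why classical simulation of qubits blows up so quickly:

```python
# Rough arithmetic only; the FLOPS figures are the round numbers quoted above,
# not measured benchmarks.
pentium_ii_flops = 1e8        # ~100 million FLOPS (first-year Google servers)
gpu_flops = 2e13              # ~20 trillion FLOPS (a current GPU)
tpu_flops = 1.8e14            # ~180 trillion FLOPS (a TPU)

print(f"GPU vs Pentium II: {gpu_flops / pentium_ii_flops:,.0f}x")   # ~200,000x
print(f"TPU vs Pentium II: {tpu_flops / pentium_ii_flops:,.0f}x")   # ~1,800,000x

# Simulating n qubits classically means tracking on the order of 2^n complex
# amplitudes, which is why 72 qubits is already very hard and 333 is hopeless.
for n in (72, 333):
    print(f"{n} qubits -> 2^{n} is about {float(2**n):.2e} amplitudes")
```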




Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms.

Finally, it was Kesarovski’s time. He was a populariser of science, often writing computer guides for children, as well as essays that lauded information technology as a solution to future problems. This was reflected in his short stories, three of which were published in the collection The Fifth Law of Robotics (1983). In the first, he explored a vision of the human body as a cybernetic machine. A scientist looking for proof of alien consciousness finds it – in his own blood cells. Deciphering messages sent by an alien mind, trying to decode what their descriptions of society actually mean, he gradually comes to understand his own body as a sort of robot.

Kesarovski’s vision of nesting cybernetic machines – turtles all the way down or up – indicates his own training as one of the regime’s specialists: he was a more optimistic writer than Dilov.

In Kesarovski’s telling, the Fifth Law [of Robotics] states that ‘a robot must know it is a robot’. As the novella progresses, we face a cyborg that melds the best of machine and human mind together…  For Kesarovski, computers and robots held dangers, but also a promise, if humanity could one day see that it was both a type of robot itself, and in a position only to gain from the machines’ powers, allowing it to attain the next step in its historical progress.

Communist robot dreams





This is a nice summary of the history of AI to this point by Rodney Brooks.
The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

The Origins of “Artificial Intelligence”

THE EARLY DAYS
It is generally agreed that John McCarthy coined the phrase “artificial intelligence” in the written proposal for a 1956 Dartmouth workshop, dated August 31st, 1955. It is authored by, in listed order, John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM and Claude Shannon of Bell Laboratories. Later all but Rochester would serve on the faculty at MIT, although by early in the sixties McCarthy had left to join Stanford University. The nineteen page proposal has a title page and an introductory six pages (1 through 5a), followed by individually authored sections on proposed research by the four authors. It is presumed that McCarthy wrote those first six pages which include a budget to be provided by the Rockefeller Foundation to cover 10 researchers.

The title page says A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE. The first paragraph includes a sentence referencing “intelligence”:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

And then the first sentence of the second paragraph starts out:
The following are some aspects of the artificial intelligence problem:

That’s it! No description of what human intelligence is, no argument about whether or not machines can do it (i.e., “do intelligence”), and no fanfare on the introduction of the term “artificial intelligence” (all lower case).


This is a great summary from Nature of the current state of memristors and their potential for a new computational paradigm.

The future of electronics based on memristive systems

Abstract
A memristor is a resistive device with an inherent memory. The theoretical concept of a memristor was connected to physically measured devices in 2008 and since then there has been rapid progress in the development of such devices, leading to a series of recent demonstrations of memristor-based neuromorphic hardware systems. Here, we evaluate the state of the art in memristor-based electronics and explore where the future of the field lies. We highlight three areas of potential technological impact: on-chip memory and storage, biologically inspired computing and general-purpose in-memory computing. We analyse the challenges, and possible solutions, associated with scaling the systems up for practical applications, and consider the benefits of scaling the devices down in terms of geometry and also in terms of obtaining fundamental control of the atomic-level dynamics. Finally, we discuss the ways we believe biology will continue to provide guiding principles for device innovation and system optimization in the field.


This is another signal of the emergence of a new scientific paradigm into everyday reality.

Spooky quantum entanglement goes big in new experiments

Two teams entangled the motions of two types of small, jiggling devices
Quantum entanglement has left the realm of the utterly minuscule, and crossed over to the just plain small. Two teams of researchers report that they have generated ethereal quantum linkages, or entanglement, between pairs of jiggling objects visible with a magnifying glass or even the naked eye — if you have keen vision.

Physicist Mika Sillanpää and colleagues entangled the motion of two vibrating aluminum sheets, each 15 micrometers in diameter — a few times the thickness of spider silk. And physicist Sungkun Hong and colleagues performed a similar feat with 15-micrometer-long beams made of silicon, which expand and contract in width in a section of the beam. Both teams report their results in the April 26 Nature.

“It’s a first demonstration of entanglement over these artificial mechanical systems,” says Hong, of the University of Vienna. Previously, scientists had entangled vibrations in two diamonds that were macroscopic, meaning they were visible (or nearly visible) to the naked eye. But this is the first time entanglement has been seen in macroscopic structures constructed by humans, which can be designed to meet particular technological requirements.


It’s taking longer than many anticipated to deliver powerful augmented reality - here’s one very good signal.
“There are lots of applications for this technology, including in teaching, physiotherapy, laparoscopic surgery and even surgical planning,” said Watts, who developed the technology with fellow graduate student Michael Fiest.

Augmented reality system lets doctors see under patients’ skin without the scalpel

New technology lets clinicians see patients’ internal anatomy displayed right on the body.
New technology is bringing the power of augmented reality into clinical practice.
The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
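
To make the tracking idea concrete, here is a minimal sketch - my own illustration in Python/NumPy, not ProjectDR's actual software, with made-up marker coordinates: fit a transform from the marker positions recorded at calibration to the positions the tracker reports now, and use it to re-anchor the projected overlay so it follows the body.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous source coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M ~ dst
    return M.T                                   # 2x3 affine matrix

# Hypothetical marker positions (cm) at calibration vs. after the patient shifts.
calib = np.array([[0, 0], [10, 0], [10, 20], [0, 20]], float)
now   = np.array([[1.2, 0.5], [11.1, 1.3], [10.4, 21.2], [0.3, 20.6]], float)

M = fit_affine(calib, now)

# Re-anchor the corners of the CT overlay so the projection tracks the movement.
overlay = np.array([[2, 3], [8, 3], [8, 17], [2, 17]], float)
moved = (M @ np.hstack([overlay, np.ones((4, 1))]).T).T
print(np.round(moved, 2))
```

A real system also has to calibrate the projector and cameras and cope with non-rigid deformation, but the core loop - continuously re-fit a transform from tracked markers and re-render - is the part sketched here.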


This is a must-see GIF - a two-second video.

Girl lost both her hands as a baby. Here she is testing the precision and dexterity of her new 3D-printed bionics




This is an important signal to watch - right now it’s definitely a weak signal - but in the coming decades it may displace many current professional sports domains. The key is not just the looming ‘virtual reality’ dimension - it is the blending of a very wide variety of participation by pros and fans, spectatorship, and the production of a vast diversity of gaming genres and actual games.

NINJA’S FORTNITE TOURNAMENT WAS AN EXHILARATING AND UNPRECEDENTED E-SPORTS EXPERIMENT

The future of e-sports may be in hybrid entertainment that puts fans, pros, and streamers together in the same server
As I found myself seated at a gaming PC on the floor of the Luxor hotel’s exuberant Esports Arena, preparing to play Fortnite among the best players in the world, I can say my confidence levels were not very high. I had never played video games competitively before, though not for lack of trying. I consider myself an above average player of most shooters, from the early days of Halo and Call of Duty to now Destiny and Overwatch. Yet the feeling of playing under this kind of pressure and against players of this caliber was alien to me.

But I went to Las Vegas last weekend to see what the first big Fortnite e-sports tournament was going to look like, and specifically how it would feel to participate in it. Unlike most e-sports competitions, this one let members of the public compete, and it all centered on the chance to play against Tyler “Ninja” Blevins, the most popular streamer on Twitch and one of the world’s most talented Fortnite players. Just a few minutes into my first match on Saturday evening, one of nine consecutive games Ninja would participate in, I found myself under fire. Seconds later, an opponent descended on me and took me out with a shotgun blast. I never stood a chance.

I looked up at the big screen behind me and off to the left, an enormous monitor featuring Ninja’s perspective spanning the entire back wall of the arena. Below it, Ninja was playing on his own custom machine located center stage. I wanted to see whether it was the Twitch star that had taken me out. Thankfully, it wasn’t; my poor performance wasn’t broadcasted to hundreds of thousands of people watching online. But in a way, it would have been an honor to say I got to personally face off against one of the best, even if I inevitably lost. And that’s precisely what made the event, officially called Ninja Vegas 18, such an unprecedented e-sports experiment.

As for Ninja’s event, it was a resounding success. Although Ninja won only one of his nine games, the level of competition from both professional players and relatively unknown competitors was wildly entertaining, creating dozens of crowd-pleasing moments and surprise victories. And viewers agreed — more than 667,000 people tuned in to Ninja’s personal Twitch stream at the tournament’s peak. It broke the platform’s all-time concurrent viewer record Ninja himself set back in March when he live streamed a Fortnite session with Drake, NFL player JuJu Smith-Schuster, and rapper Travis Scott.


This is an interesting 5 min read about the difference between the ‘evils of adtech’ and real advertising. There are some worthwhile links in the article.

How True Advertising Can Save Journalism From Drowning in a Sea of Content

Journalism is in a world of hurt because it has been marginalized by a new business model that requires maximizing “content” instead. That model is called adtech.

We can see adtech’s effects in The New York Times’ In New Jersey, Only a Few Media Watchdogs Are Left, by David Chen. His prime example is the Newark Star-Ledger, “which almost halved its newsroom eight years ago,” and “has mutated into a digital media company requiring most reporters to reach an ever-increasing quota of page views as part of their compensation.”

That quota is to attract adtech placements.
While adtech is called advertising and looks like advertising, it’s actually a breed of direct marketing, which is a cousin of spam and descended from what we still call junk mail.


Quorum sensing is a widespread strategy for group decision-making at many levels of life - from bacteria and slime molds to insects and mammals - perhaps even ecosystems.

How to Sway a Baboon Despot

What other species can teach us about democracy
Early last year, more than 70 years after its publication, George Orwell’s Animal Farm appeared on The Washington Post’s best-seller list. A writer for the New York Observer declared the novel—an allegory involving a government run by pigs—a “guidepost” for politics in the age of Donald Trump. A growing body of research, however, suggests that animals may offer political lessons that are more than allegorical: Many make decisions using familiar political systems, from autocracy to democracy. How these systems function, and when they falter, may be telling for Homo sapiens.

As in human democracies, the types of votes in animal democracies vary. When deciding where to forage, for instance, Tonkean macaques line up behind their preferred leader; the one with the most followers wins. Swans considering when to take flight bob their heads until a “threshold of excitability” is met, at which point they collectively rise into the sky. Honeybee colonies needing a new home vote on where to go: Thomas Seeley, a Cornell biologist, has found that scout bees investigate the options and inform the other bees of potential sites through complex “waggle dances” that convey basic information (distance, direction, overall quality). When a majority is won over by a scout’s campaign, the colony heads for its new home.

Research also shows that animal democracies, like human ones, can go awry. For instance, Seeley found that bees sometimes chose a mediocre—even terrible—site over an objectively better option. When this happened, it was invariably because they had “satisficed”—that is, settled for a plausible choice that came in early, rather than waiting for more options. Seeley told me he once saw several bees return to a hive and perform “unenthusiastic, lethargic” dances. With no great choices, they began coalescing around the best of the middling ones. At the last minute, though, “one bee came back, and she was so excited,” Seeley said. “She danced and danced and danced. She must have found something wonderful. But it was too late.” The bees had picked their candidate; momentum carried the day.
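
To make the satisficing failure mode concrete, here is a toy simulation - my own sketch, not Seeley's model: scouts recruit nest-mates roughly in proportion to a site's quality, and the swarm commits as soon as any site crosses a quorum, so a good-enough site found early can beat an excellent one found late.

```python
import random

def swarm_decision(sites, quorum=50, steps=200, seed=1):
    """sites: list of (name, quality in 0..1, time step at which it is found)."""
    random.seed(seed)
    support = {name: 0 for name, _, _ in sites}
    for t in range(steps):
        for name, quality, found_at in sites:
            if t >= found_at and random.random() < quality:
                support[name] += 1            # one more scout recruited
            if support[name] >= quorum:
                return name, t                # quorum reached: the swarm commits
    return None, steps

sites = [("mediocre hollow", 0.55, 0),        # found early
         ("excellent hollow", 0.95, 60)]      # found late, by one excited scout
print(swarm_decision(sites))                  # the early, mediocre site wins
```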

Why do bees take a vote to begin with, though? In 2013, researchers at the Max Planck Institute for Human Development, the London School of Economics, and the University of Sussex used game theory to show that animals’ willingness to behave democratically redounds to their benefit. Compared with decisions handed down by tyrant leaders, democratic decisions are less likely to be flawed. Moreover, when animals have a chance to register their opinion, the gap between the average individual’s preferred outcome and the actual outcome tends to be smaller than it would be if the decision were made by fiat. In this way, animal democracy is stabilizing; few get their way, but most are relatively content.
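
The stabilising effect of pooling votes can be seen in a back-of-envelope calculation - mine, not the researchers' game-theoretic model: if each individual independently picks the better option with probability p, a majority of many such voters is correct far more often than a lone despot with the same accuracy.

```python
from math import comb

def majority_correct(n, p):
    """Probability that more than half of n independent voters pick the better option."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6                              # each individual is only modestly reliable
for n in (1, 11, 51, 101):
    print(n, round(majority_correct(n, p), 3))
# a lone despot is right 60% of the time; a 101-member majority, roughly 98%
```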


This is an amazing development in our understanding of how communities of cells can communicate and exchange material with one another - especially when under stress. It is a means complementary to horizontal gene transfer. The illustrative GIF is worth the view.

Cells Talk and Help One Another via Tiny Tube Networks

How did the tunneling nanotubes go unnoticed for such a long time? Lou notes that in the last couple of decades, cancer research has centered primarily on detecting and therapeutically targeting mutations in cancer cells — and not the structures between them. “It’s right in front of our face, but if that’s not what people are focusing on, they’re going to miss it,” he said.

That’s changing now. In the last few years, the number of researchers working on TNTs and figuring out what they do has risen steeply. Research teams have discovered that TNTs transfer all kinds of cargo beyond microRNAs, including messenger RNAs, proteins, viruses and even whole organelles, such as lysosomes and mitochondria.

To understand whether or not the cells actively regulate these transfers, Haimovich challenged them with heat shock and oxidative stress. If changes in the environmental conditions changed the rate of RNA transfer, that “would suggest that this is a biologically regulated mechanism, not just diffusion of RNA by chance,” he explained. He found that oxidative stress did induce an increase in the rate of transfer, while heat shock induced a decrease. Moreover, this effect was seen if stress was inflicted on acceptor cells but not if it was also inflicted on donor cells prior to co-culture, Haimovich clarified by email. “This suggests that acceptor cells send signals to the donor cells ‘requesting’ mRNA from their neighbors,” he said. His results were reported in the Proceedings of the National Academy of Sciences last year.

“Our general hypothesis is that when a cell is in danger or is dying or is stressed, the cell tries to implement a way of communication that is normally used during development, because we believe that these TNTs are more for fast communication in a developing organism,” she said. “However, when the cell is affected by a disease or infected by a virus or prion, the cell is stressed out, and it sends these protrusions to try to get help from cells that are in good health — or to discharge the prions.”


The world of plants is full of surprises.

Trees are not as 'sound asleep' as you may think

High-precision three-dimensional surveying of 21 different species of trees has revealed a previously unknown cycle of subtle canopy movement during the night. The 'sleep cycles' differed from one species to another. Detection of anomalies in overnight movement could become a future diagnostic tool to reveal stress or disease in crops.

Overnight movement of leaves is well known for tree species belonging to the legume family, but it was only recently discovered that some other trees also lower their branches by up to 10 centimeters at night and raise them back in the morning. These branch movements are slow and subtle, and take place at night, which makes them difficult to identify with the naked eye. However, terrestrial laser scanning, a 3-dimensional surveying technique developed for precision mapping of buildings, makes it possible to measure the exact position of branches and leaves.
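
As an illustration of how such subtle overnight movement can be pulled out of scan data, here is a minimal sketch with made-up point clouds - not the researchers' pipeline: compare an evening scan with a pre-dawn scan by nearest-neighbour distance and report the typical displacement.

```python
import numpy as np

rng = np.random.default_rng(0)
evening = rng.uniform(0, 10, size=(500, 3))   # stand-in evening laser scan (metres)
predawn = evening.copy()
predawn[:, 2] -= 0.08                         # pretend the canopy sank ~8 cm overnight

# Nearest-neighbour distance from each evening point to the pre-dawn cloud.
d = np.linalg.norm(evening[:, None, :] - predawn[None, :, :], axis=2)
nearest = d.min(axis=1)
print(f"median overnight displacement: {np.median(nearest) * 100:.1f} cm")   # ~8.0 cm
```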


This is a very interesting project - an example of how individual, crowdsourced and other funding can catalyze ways to mitigate large problems.
“I would never be able to work on a photo-sharing app or ‘internet startup XYZ,'” he says. “I think people overestimate the risk of high-risk projects. Personally, I think I would find it much harder to make a photo-sharing app a success–it sounds counterintuitive, because it’s much easier from an engineering perspective, but I think if you work on something that’s truly exciting and bold and complicated, then you will attract the kind of people that are really smart and talented. People that like solving complicated problems.”

The Revolutionary Giant Ocean Cleanup Machine Is About To Set Sail

Boyan Slat dropped out of school to work on his design for a device that could collect the trillions of pieces of plastic floating in the ocean. After years of work, it’s ready to take its first voyage.
Six years ago, the technology was only an idea presented at a TEDx talk. Boyan Slat, the 18-year-old presenter, had learned that cleaning up the tiny particles of plastic in the ocean could take nearly 80,000 years. Because of the volume of plastic spread through the water, and because it is constantly moving with currents, trying to chase it with nets would be a losing proposition. Slat instead proposed using that movement as an advantage: With a barrier in the water, he argued, the swirling plastic could be collected much more quickly. Then it could be pulled out of the water and recycled.

Some scientists have been skeptical that the idea is feasible. But Slat, undeterred, dropped out of his first year of university to pursue the concept, and founded a nonprofit to create the technology, The Ocean Cleanup, in 2013. The organization raised $2.2 million in a crowdfunding campaign, and other investors, including Salesforce CEO Marc Benioff, brought in millions more to fund research and development. By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch–around 40,000 metric tons–within five years.


I have to say that I love this paradox of sensorial goodness and gustatory wonder.

A Paean to PB&P

Why a peanut butter and pickle sandwich is the totally not-gross snack you need in your mouth right now.
Dwight Garner, an accomplished New York Times book critic, can count himself a member of the rarefied club of journalists whose writing has actually moved hearts and minds on a topic of great importance. In one 2012 article, he changed my life, intimately and permanently, with an ode to an object I’d never previously considered with the solemnity it deserves: the peanut butter and pickle sandwich.

When I clicked on Garner’s piece “Peanut Butter Takes On an Unlikely Best Friend” in October 2012, it was with great skepticism. I expected to be trolled with outrageous, unsupported assertions and straw-man arguments. Instead, I found myself drawn in by lip-smacking prose and a miniature history lesson. Peanut butter and pickle sandwiches were a hit at Depression-era lunch counters, I learned, and in cookbooks from the 1930s and ’40s, which recommended they be crafted with pickle relish rather than slices or spears. Garner quoted the founder of a peanut butter company, who remarked that the savory-and-sour flavor profile of the sandwich is more common in South and East Asian cuisines. This observation was my eureka moment: One of my favorite Thai dishes, papaya salad, traditionally combines raw peanuts with a lime and rice vinegar–based dressing. Perhaps the sandwich I’d only ever imagined in the context of stupid jokes about pregnancy cravings could be equally delicious.
