Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes:
Articles:
Arguably, the 2008 financial crisis was different, because, as the theorist Geoffrey West writes, it was “stimulated by misconceived dynamics in the parochial and relatively localized US mortgage industry,” and thus exposed “the challenges of understanding complex adaptive systems.”
It is important to acknowledge such challenges. But an even more important point to note is that the profiles of the world’s largest companies today are very different from those of a decade ago. In 2008, PetroChina, ExxonMobil, General Electric, China Mobile, and the Industrial and Commercial Bank of China were among the firms with the highest market capitalization. In 2018, that status belongs to the so-called FAANG cluster: Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company).
Against this backdrop, it is no surprise that the US Federal Reserve’s annual symposium in Jackson Hole, Wyoming, last month focused on the dominance of digital platforms and the emergence of “winner-take-all” markets, not global debt. This newfound awareness reflects the fact that it is intangible assets like digital software, not physical manufactured goods, that are driving the new phase of global growth.
Bill Gates, the founder of Microsoft, recently explained this profound shift in a widely shared blog post. “The portion of the world’s economy that doesn’t fit the old model just keeps getting larger,” he writes. And this development “has major implications for everything from tax law to economic policy to which cities thrive and which cities fall behind.” The problem is that, “in general, the rules that govern the economy haven’t kept up. This is one of the biggest trends in the global economy that isn’t getting enough attention.”
Looking Beyond Lehman
The existence of simplicity in the description of underlying fundamental laws is not the only way that simplicity arises in science. The existence of multiple levels implies that simplicity can also be an emergent property. This means that the collective behavior of many elementary parts can behave simply on a much larger scale.
The study of complex systems focuses on understanding the relationship between simplicity and complexity. This requires both an understanding of the emergence of complex behavior from simple elements and laws, as well as the emergence of simplicity from simple or complex elements that allow a simple larger scale description to exist.
Whenever we are describing a simple macroscopic behavior, it is natural that the number of microscopic parameters relevant to model this behavior must be small. This follows directly from the simplicity of the macroscopic behavior. On the other hand, if we describe a complex macroscopic behavior, the number of microscopic parameters that are relevant must be large.
Yaneer Bar-Yam - Emergence of simplicity and complexity
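For the more hands-on reader, here is a minimal sketch (my own illustration, not from Bar-Yam's paper) of how a simple large-scale description can emerge from many noisy microscopic parts - the average of the parts behaves ever more simply, and more reproducibly, as the system grows:

```python
# A minimal sketch of simplicity emerging at a larger scale: many independent,
# noisy microscopic elements yield a macroscopic average that is well described
# by a single number whose fluctuations shrink as the system grows.
import random
import statistics

def macroscopic_average(n_parts, bias=0.1):
    """Average the states (+1/-1) of n_parts noisy microscopic elements."""
    states = [1 if random.random() < 0.5 + bias / 2 else -1 for _ in range(n_parts)]
    return statistics.mean(states)

if __name__ == "__main__":
    random.seed(0)
    for n in (10, 1_000, 100_000):
        samples = [macroscopic_average(n) for _ in range(20)]
        spread = max(samples) - min(samples)
        # The larger the system, the simpler (more reproducible) the large-scale description.
        print(f"{n:>7} parts: mean ~ {statistics.mean(samples):+.3f}, spread ~ {spread:.3f}")
```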
On Bad News and Good News:
The bad news is cyberwar. It’s looking extremely powerful. It doesn’t have any rules yet. And it will only get rules through some pretty wretched excesses and disasters. And it’s going to take the world pretty much understanding and acting as one—which has never happened before. But I’m hopeful. Kevin Kelly’s line is that it’s pretty obvious we’re going to have to have global governance. That is what it will take to develop the rules of cyberwarfare.
And here’s my hopeful version: Climate change is forcing humanity to act as one to solve a problem that we created. It’s not like the Cold War. Climate change is like a civilizational fever. And we’ve got to find various ways to understand the fever and cure it in aggregate.
All this suggests that this century will be one where a kind of planetary civilization wakes up and discovers itself, that we are as gods and we have to get good at it.
Stewart Brand and the Tools That Will Make the Whole Earth
This is a good signal of the exploration of the possibility space for the emergence of new institutions related to the digital environment. While an ‘Auditor General of Algorithms’ isn’t mentioned, this list makes such an institution more plausible.
12 Organizations Saving Humanity from the Dark Side of AI
Artificial Intelligence (AI) can help do many incredible things, from detecting cancer to driving our cars, but it also raises many questions and poses new challenges. Stephen Hawking, the renowned physicist, once told the BBC, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Ensuring that this transition to the age of AI remains beneficial for humanity is one of the greatest challenges of our time. Here are 12 organizations working on saving humanity from the dark side of AI.
This is another vital institution - one that was a gift to the world and that is threatened with an ‘enclosure movement’ by privateers. Let’s hope this new gift is more resistant to capture by rent-seekers.
Tim Berners-Lee tells us his radical new plan to upend the World Wide Web
With an ambitious decentralized platform, the father of the web hopes it’s game on for corporate tech giants like Facebook and Google.
This week, Berners-Lee will launch Inrupt, a startup that he has been building, in stealth mode, for the past nine months. Backed by Glasswing Ventures, its mission is to turbocharge a broader movement afoot, among developers around the world, to decentralize the web and take back power from the forces that have profited from centralizing it. In other words, it’s game on for Facebook, Google, Amazon. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over.
“We have to do it now,” he says, displaying an intensity and urgency that is uncharacteristic for this soft-spoken academic. “It’s a historical moment.” Ever since revelations emerged that Facebook had allowed people’s data to be misused by political operatives, Berners-Lee has felt an imperative to get this digital idyll into the real world. In a post published this weekend, Berners-Lee explains that he is taking a sabbatical from MIT to work full time on Inrupt. The company will be the first major commercial venture built off of Solid, a decentralized web platform he and others at MIT have spent years building.
A NETSCAPE FOR TODAY’S INTERNET
If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. And like with Netscape, Berners-Lee hopes Inrupt will be just the first of many companies to emerge from Solid.
Well, I’m not sure that the general concept of exponential rates of change and improvement is over - but we are definitely on the threshold of new paradigms, including computational ones.
“Revolutionary new hardware architectures and new software languages, tailored to dealing with specific kinds of computing problems, are just waiting to be developed,” he said. “There are Turing Awards waiting to be picked up if people would just work on these things.”
It’s Time for New Computer Architectures and Software Languages
Moore’s Law is over, ushering in a golden age for computer architecture, says RISC pioneer
David Patterson—University of California professor, Google engineer, and RISC pioneer—says there’s no better time than now to be a computer architect.
That’s because Moore’s Law really is over, he says: “We are now a factor of 15 behind where we should be if Moore’s Law were still operative. We are in the post–Moore’s Law era.”
This means, Patterson told engineers attending the 2018 @Scale Conference held in San Jose last week, that “we’re at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend’s new computer was so much faster.”
But last year, he said, “single program performance only grew 3 percent, so it’s doubling every 20 years. If you are just sitting there waiting for chips to get faster, you are going to have to wait a long time.”
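A quick back-of-envelope check of Patterson's arithmetic, assuming a steady 3 percent annual gain in single-program performance:

```python
# Back-of-envelope: how long does doubling take at ~3% per year?
import math

annual_growth = 0.03  # Patterson's figure for last year's single-program performance growth
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time at {annual_growth:.0%}/year: ~{doubling_time:.0f} years")
# Prints ~23 years - the same order as the "doubling every 20 years" quoted above,
# versus the roughly 18-month doubling the industry grew up with.
```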
Imagine an AI-monitored hospital - real-time, all-the-time monitoring of patients. This is a good signal of the possibilities.
ICUs are among the most expensive components of the health care system. About 55,000 patients are cared for in an ICU every day, with the typical daily cost ranging from US $3,000 to $10,000. The cumulative cost is more than $80 billion per year.
AI Could Provide Moment-by-Moment Nursing for a Hospital’s Sickest Patients
In the intensive care unit, artificial intelligence can keep watch at a patient’s bedside
In a hospital’s intensive care unit (ICU), the sickest patients receive round-the-clock care as they lie in beds with their bodies connected to a bevy of surrounding machines. This advanced medical equipment is designed to keep an ailing person alive. Intravenous fluids drip into the bloodstream, while mechanical ventilators push air into the lungs. Sensors attached to the body track heart rate, blood pressure, and other vital signs, while bedside monitors graph the data in undulating lines. When the machines record measurements that are outside of normal parameters, beeps and alarms ring out to alert the medical staff to potential problems.
While this scene is laden with high tech, the technology isn’t being used to best advantage. Each machine is monitoring a discrete part of the body, but the machines aren’t working in concert. The rich streams of data aren’t being captured or analyzed. And it’s impossible for the ICU team—critical-care physicians, nurses, respiratory therapists, pharmacists, and other specialists—to keep watch at every patient’s bedside.
The ICU of the future will make far better use of its machines and the continuous streams of data they generate. Monitors won’t work in isolation, but instead will pool their information to present a comprehensive picture of the patient’s health to doctors. And that information will also flow to artificial intelligence (AI) systems, which will autonomously adjust equipment settings to keep the patient in optimal condition.
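For what it's worth, a back-of-envelope check of the cost figures quoted above, using only the article's own numbers:

```python
# Rough check of the quoted ICU cost figures.
patients_per_day = 55_000                        # ICU patients cared for on a typical day
daily_cost_low, daily_cost_high = 3_000, 10_000  # typical cost per ICU day, in US dollars

annual_low = patients_per_day * daily_cost_low * 365
annual_high = patients_per_day * daily_cost_high * 365
print(f"Implied annual ICU spending: ${annual_low/1e9:.0f}B - ${annual_high/1e9:.0f}B")
# ~ $60B - $201B, consistent with the "more than $80 billion per year" figure above.
```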
This is a very important signal regarding the enhancement of human capability.
The first “social network” of brains lets three people transmit thoughts to each other’s heads
BrainNet allows collaborative problem-solving using direct brain-to-brain communication.
The ability to send thoughts directly to another person’s brain is the stuff of science fiction. At least, it used to be.
In recent years, physicists and neuroscientists have developed an armory of tools that can sense certain kinds of thoughts and transmit information about them into other brains. That has made brain-to-brain communication a reality.
These tools include electroencephalograms (EEGs) that record electrical activity in the brain and transcranial magnetic stimulation (TMS), which can transmit information into the brain.
In 2015, Andrea Stocco and his colleagues at the University of Washington in Seattle used this gear to connect two people via a brain-to-brain interface. The people then played a 20 questions–type game.
An obvious next step is to allow several people to join such a conversation, and today Stocco and his colleagues announced they have achieved this using a world-first brain-to-brain network. The network, which they call BrainNet, allows a small group to play a collaborative Tetris-like game. “Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem-solving by humans using a ‘social network’ of connected brains,” they say.
This is an important report on the current state of the ‘Platform Economy’ from JPMorgan Chase.
The Online Platform Economy in 2018 - Drivers, Workers, Sellers, and Lessors
Technological innovation is transforming economic exchange. Just a decade ago, the Online Platform Economy comprised a handful of marketplaces connecting independent sellers to buyers of physical goods. Today, many consumers use software platforms to procure almost any kind of good or service from independent suppliers as a routine part of daily life. Have these innovations created viable new options for making a living?
The Online Platform Economy is growing. As it grows, its sectors are diverging in important ways, raising the question as to whether they require tailored policy approaches. While freelance driving has been the engine of growth for the Online Platform Economy, it is not a full time job for most participants. In fact, alongside the rapid growth in the number of drivers has come a steady decline in average monthly earnings. Non-transportation work platforms continue to innovate on the types of contracts between independent suppliers and their customers. In selling and leasing sectors, high platform earnings are concentrated among a few participants. More broadly, we do not find evidence that the Online Platform Economy is replacing traditional sources of income for most families. Taken together, our findings indicate that regardless of whether or not platform work could in principle represent the “future of work,” most participants are not putting it to the type of use that would usher in that future.
Speaking of new forms of architecture, this is a good signal of the emerging use of distributed ledger systems for tracking things with transparent provenance.
"Our customers deserve a more transparent supply chain," said Frank Yiannas, vice president of food safety for Walmart, in a statement Monday. "This is a smart, technology-supported move that will greatly benefit our customers and transform the food system."
Walmart picks blockchain to track food safety with veggie suppliers
The company will be able to track exactly where that head of romaine lettuce came from.
Want to sell your vegetables through Walmart's formidably large network of stores? Better get on the blockchain.
Blockchain, a technology for creating a single ledger of transactions shared among the many parties involved, is potentially a big deal. That's certainly how Walmart sees it, requiring all suppliers of leafy green vegetables for Walmart's 5,358 stores to track their food with blockchain within one year to improve food safety.
Blockchain is in a bit of a rough patch right now, as some skeptics roll their eyes at attempts to cure all the world's ills with the technology. But that doesn't mean it's complete bunk. Walmart's endorsement shows a major corporate power believes that tracking food shipments can be a lot better with blockchain's distributed ledger.
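For the curious, here is a minimal sketch of the underlying idea - a hash-linked ledger of supply-chain events. This is an illustration only, not the system Walmart or its suppliers actually use, and all the field names are hypothetical:

```python
# A minimal sketch of a hash-linked provenance ledger, in the spirit of the article.
# Illustration only - not Walmart's actual system; every field name is hypothetical.
import hashlib
import json
import time

def add_entry(ledger, event):
    """Append a supply-chain event, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return body

ledger = []
add_entry(ledger, {"item": "romaine lettuce lot 42", "step": "harvested", "farm": "Example Farm"})
add_entry(ledger, {"item": "romaine lettuce lot 42", "step": "shipped", "carrier": "Example Trucking"})
add_entry(ledger, {"item": "romaine lettuce lot 42", "step": "received", "store": "Store #1234"})

# Any later tampering with an earlier entry changes its hash and breaks the chain,
# which is what makes a record shared among many parties auditable.
for entry in ledger:
    print(entry["event"]["step"], entry["hash"][:12])
```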
This is a very important signal - perhaps suggesting that the new ‘hardware architectures’ should also include software and human networking-participatory architectures - emulating a shift from single-celled organisms to large multi-celled organisms. The extended mind concept creates more powerful collective intelligence and group consciousness.
“It went really well,” says Matthew Lungren, a pediatric radiologist at Stanford University Medical School, coauthor on the paper and one of the eight participants. “Before, we had to show [an X-ray] to multiple people separately and then figure out statistical ways to bring their answers to one consensus. This is a much more efficient and, frankly, more evidence-based way to do that.”
“We shouldn’t throw away human knowledge, wisdom, and experience,” says Louis Rosenberg, CEO and founder of Unanimous AI. “Instead, let’s look at how we can use AI to leverage those things.”
AI-Human “Hive Mind” Diagnoses Pneumonia
A small group of doctors moderated by AI algorithms made a more accurate diagnosis than individual physicians or AI alone
First, it correctly predicted the top four finishers at the Kentucky Derby. Then, it was better at picking Academy Award winners than professional movie critics—three years in a row. The cherry on top was when it prophesied that the Chicago Cubs would end a 108-year dry spell by winning the 2016 World Series—four months before the Cubs were even in the playoffs. (They did.)
Now, this AI-powered predictive technology is turning its attention to an area where it could do some real good—diagnosing medical conditions.
In a study presented on Monday at the SIIM Conference on Machine Intelligence in Medical Imaging in San Francisco, Stanford University doctors showed that eight radiologists interacting through Unanimous AI’s “swarm intelligence” technology were better at diagnosing pneumonia from chest X-rays than individual doctors or a machine-learning program alone.
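Unanimous AI’s ‘swarm’ is a real-time negotiation among the participants rather than a simple poll; still, the older consensus approach Lungren describes above can be sketched in a few lines (the probability estimates below are invented purely for illustration):

```python
# Not Unanimous AI's swarm algorithm (which mediates a real-time negotiation among
# participants); just a baseline sketch of statistically pooling independent estimates.
from statistics import mean

def pooled_diagnosis(probabilities, threshold=0.5):
    """Average independent per-doctor probabilities of pneumonia into one call."""
    p = mean(probabilities)
    return ("pneumonia" if p >= threshold else "no pneumonia"), p

doctor_estimates = [0.62, 0.40, 0.71, 0.55, 0.48, 0.66, 0.58, 0.73]  # hypothetical values
label, p = pooled_diagnosis(doctor_estimates)
print(f"Pooled estimate: {p:.2f} -> {label}")
```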
This is a great must-read signal supporting the need for an AI-Human ‘hive mind’ applied to science and the publication of science research. There is too much to know, and there is even more to learn from our failures - especially in the complex and human sciences.
‘Journalologists’ use scientific methods to study academic publishing. Is their work improving science?
The Chicago, Illinois, meeting marked the birth of what is now sometimes called journalology, a term coined by Stephen Lock, a former editor of The British Medical Journal (The BMJ). Its goal: improving the quality of at least a slice of the scientific record, in part by creating an evidence-based protocol for the path from the design of a study to its publication. That medical journals took a leading role isn't surprising. A sloppy paper on quantum dots has never killed anyone, but a clinical trial on a new cancer drug can mean the difference between life and death.
The field has grown steadily and has spurred important changes in publication practices. Today, for example, authors register a clinical trial in advance if they want it considered for publication in a major medical journal, so it doesn't vanish if the results aren't as hoped. And authors and journal editors often pledge to include in their papers details important for assessing and replicating a study. But almost 30 years on, plenty of questions remain, says clinical epidemiologist David Moher of The Ottawa Hospital Research Institute, a self-described journalologist. Moher—who once thought his dyslexia explained why he couldn't understand so much published research—wants to know whether the reporting standards that journals now embrace actually make papers better, for instance, and whether training for peer reviewers and editors is effective.
Finding the answers isn't easy. Journalology still hovers on the edge of respectable science, in part because it's often competing with medicine for dollars and attention. Journals are also tough to study and sometimes secretive, and old habits die hard. "It's hard," Moher says, "to be a disruptor in this area."
This is about more than Uber - it’s about how we collectively harness 'platforms' of costless coordination to be the public infrastructure of the 21st Century ... OK, the first half of the 21st Century. This is filed under - imagine if Facebook had become a Foundation similar to Wikimedia?
What if Uber Was Owned and Governed by Its Drivers?
The rise of platform cooperatives
We have an epic choice before us between platform coops and Death Star platforms, and the time to decide is now. It might be the most important economic decision we ever make, but most of us don’t even know we have a choice.
And just what is a Death Star platform? Bill Johnson of StructureC3 referred to Uber and Airbnb as Death Star platforms in a recent chat. The label struck me as surprisingly apt: it reflects the raw ambition and focused power of these platforms, particularly Uber.
Uber’s big bet is global monopoly or bust. They’ve raised over $8 billion in venture capital, are on track to do over $10 billion in revenue this year, and have an estimated 200,000 drivers who are destroying the taxi industry in over 300 cities worldwide. They’ve done all this in just over five years. In fact, they reached a $51 billion valuation faster than Facebook, and plan to raise even more money. If they’re successful, they’ll become the most valuable startup in history. Airbnb is nearly as big and ambitious.
Platform coops are the alternative to Death Stars. As Lisa Gansky urged, these platforms share value with the people who make them valuable. Platform coops combine a cooperative business structure with an online platform to deliver a real-world service. What if Uber was owned and governed by its drivers? What if Airbnb was owned and governed by its hosts? That’s what an emerging movement is exploring for the entire sharing economy in an upcoming conference, Platform Cooperativism.
The self-driving car is still on the near-term horizon, but self-driving transportation is emerging in the now.
Germany launches world's first autonomous tram in Potsdam
The Guardian goes for a ride on the new AI-driven Combino vehicle developed by Siemens
The world’s first autonomous tram was launched in unspectacular style in the city of Potsdam, west of Berlin, on Friday. The Guardian was the first English-language newspaper to be offered a ride on the vehicle developed by a team of 50 computer scientists, engineers, mathematicians, and physicists at the German engineering company Siemens.
Fitted with multiple radar, lidar (light from a laser), and camera sensors, forming digital eyes that film the tram and its surroundings during every journey, the tram reacts to trackside signals and can respond to hazards faster than a human.
Its makers say it is some way from being commercially viable but they do expect it to contribute to the wider field of driverless technology, and have called it an important milestone on the way to autonomous driving.
The key signal about the spread of AI in our digital environment is the speed at which learning spreads. When one person learns something important, only that person has learned it; getting others to know what has been learned is the subject of pedagogy. But when one AI learns something, all instances of that AI learn it too. This latest signal is also an omen of the future of technological unemployment for many, many people. The inevitable surprises emerge when learnings can be combined.
MIT’s New Robot Taught Itself to Pick Things Up the Way People Do
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms sat for hours quietly grasping objects of various shapes and sizes. For hours on end, they taught themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
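Roughly, the trick is that the network assigns every pixel a descriptor vector, so a grasp point chosen once on a reference image can be found again on a new image by nearest-neighbor search in descriptor space. A rough sketch of that matching step (the trained network itself is omitted, and the shapes and values below are placeholders):

```python
# A rough sketch of the correspondence step behind dense descriptors: every pixel
# gets a D-dimensional descriptor, and a point picked on a reference image is found
# again in a new image as the pixel with the closest descriptor.
import numpy as np

def find_corresponding_pixel(reference_descriptor, new_descriptor_image):
    """Return the (row, col) in the new image whose descriptor is closest to the reference."""
    h, w, d = new_descriptor_image.shape
    flat = new_descriptor_image.reshape(-1, d)
    distances = np.linalg.norm(flat - reference_descriptor, axis=1)
    return np.unravel_index(np.argmin(distances), (h, w))

# Stand-ins for network output on two views of the same object (random placeholders).
rng = np.random.default_rng(0)
desc_image_a = rng.normal(size=(48, 64, 16))
desc_image_b = rng.normal(size=(48, 64, 16))
grasp_point_a = (20, 30)              # point marked once on the reference image
ref = desc_image_a[grasp_point_a]     # its descriptor
print(find_corresponding_pixel(ref, desc_image_b))
```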
The cyborg is here - it’s just unevenly distributed. This is amazing news for some.
For Thomas, when she walked without help for the first time, “it was like watching fireworks, but from the inside,” she says. “Something I was never supposed to do ever just happened. It was awesome. There’s no other feeling like it in the world.” The device that Thomas calls “Junior” is a 16-electrode array that delivers electrical stimulation to her spinal cord. With intense training, and what Harkema calls “a whisper of an intent” from Thomas’ brain, the device has helped Thomas walk again.
TWO PEOPLE WITH PARALYSIS WALK AGAIN USING AN IMPLANTED DEVICE
‘It was like watching fireworks, but from the inside’
After Kelly Thomas’ truck flipped with her inside of it in 2014, she was told that she probably would never walk again. Now, with help from a spinal cord implant that she’s nicknamed “Junior,” Thomas is able to walk on her own.
Thomas and Jeff Marquis, who was paralyzed after a mountain biking accident, can now independently walk again after participating in a study at the University of Louisville that was published today in the New England Journal of Medicine. Thomas’ balance is still off and she needs a walker, but she can walk a hundred yards across grass. She also gained muscle and lost the nerve pain in her foot that has persisted since her accident. Another unnamed person with a spinal cord injury can now independently step across the ground with help from a trainer, according to a similar study at the Mayo Clinic that was also published today in the journal Nature Medicine.
Another signal of the emerging cyborg human 2.0 - but also, ultimately, the emergence of a ‘Social Credit’ world - retrieving (in a McLuhanesque sense) a neo-tribal world.
Why You’re Probably Getting a Microchip Implant Someday
Microchip implants are going from tech-geek novelty to genuine health tool—and you might be running out of good reasons to say no.
When Patrick McMullan first heard in early 2017 that thousands of Swedish citizens were unlocking their car doors and turning on coffee machines with a wave of their palm, he wasn’t too impressed. Sure, the technology—a millimeters-long microchip equipped with near-field communication capabilities and lodged just under the skin—had a niche, cutting-edge appeal, but in practical terms, a fob or passcode would work just as well.
McMullan, a 20-year veteran of the tech industry, wanted to do one better—to find a use for implantable microchips that was genuinely functional, not just abstractly nifty. In July 2017, news cameras watched as more than 50 employees at Three Square Market, the vending-solutions company where McMullan is president, voluntarily received chip implants of their own. Rather than a simple scan-to-function process like most of Sweden’s chips use, the chips and readers around Three Square Market’s River Falls, Wisconsin, office were all part of a multistage feedback network. For example: Your chip could grant you access to your computer—but only if it had already unlocked the front door for you that day. “Now,” McMullan says of last summer, “I’ve actually done something that enhances our network security.”
The problem McMullan’s chips cleverly solve is relatively small-scale—but it’s still a problem, and any potential new-use case represents a significant step forward for a chip evangelist like him. As with most technologies, the tipping point for implantable chips will come when they become so useful they’re hard to refuse. It could happen sooner than you think: In September 2017, Three Square Market launched an offshoot, Three Square Chip, that is developing the next generation of commercial microchip implants, with a slew of originative health features that could serve as the best argument yet that microchips’ benefits can outweigh our anxieties about them.
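The ‘multistage feedback’ example above - computer access only if the same chip already unlocked the front door that day - is, at bottom, a small stateful policy check. A toy sketch, with hypothetical names and data structures:

```python
# Purely illustrative sketch of the access rule described above: the chip may unlock
# the computer only if it already badged through the front door earlier the same day.
from datetime import date

door_events = {}  # chip_id -> date of most recent front-door unlock

def record_door_unlock(chip_id):
    door_events[chip_id] = date.today()

def may_unlock_computer(chip_id):
    return door_events.get(chip_id) == date.today()

record_door_unlock("chip-0042")
print(may_unlock_computer("chip-0042"))  # True: the door was unlocked today
print(may_unlock_computer("chip-0099"))  # False: no door event recorded today
```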
This is a great signal of emerging human enhancement - even though it’s not ready for primetime, this sort of technology is inevitable. Great for all sorts of activities.
Zooming contact lenses enlarge at a wink
A contact lens with a built in zoom that the wearer can switch at will between regular and telescopic vision could mean the end to dwindling independence for those with deteriorating eyesight, researchers suggested today. The rigid lens covers the majority of the front surface of the eye, including both the whites and the pupil, and contains an array of tiny aluminum mirrors that together can enlarge the world by 2.8x. Winking flips the view between regular and magnified.
A very interesting signal of the emerging capacity to domesticate DNA to enable a metabolic process of plastic management.
"Although the improvement is modest, this unanticipated discovery suggests that there is room to further improve these enzymes, moving us closer to a recycling solution for the ever-growing mountain of discarded plastics.
Researchers accidentally create mutant enzyme that eats plastic
An engineered enzyme that eats plastic could usher in a recycling revolution, scientists hope.
British researchers created the plastic-digesting protein accidentally while investigating its natural counterpart.
Tests showed that the lab-made mutant had a supercharged ability to break down polyethylene terephthalate (PET), one of the most popular forms of plastic employed by the food and drinks industry.
The new research sprang from the discovery of bacteria in a Japanese waste recycling centre that had evolved the ability to feed on plastic.
The bugs used a natural enzyme called PETase to digest PET bottles and containers.
While probing the molecular structure of PETase, the UK team inadvertently created a powerful new version of the protein.
Another good signal of progress on the development of new antibiotics.
There’s A New Antibiotic In Town, And We Can Create It In The Lab
A PROMISING DISCOVERY. Bacteria are shifty little things. We isolate antibiotics to kill them, and they evolve to dodge the attack. This is called antibiotic resistance, and it’s gotten so bad the United Nations (UN) actually declared it a crisis back in September 2016.
Now, a team of Chinese researchers think they’ve found a new weapon against antibiotic resistance: a fungal compound called albomycin δ2 that we can actually recreate in the lab.
They published their study Tuesday in the journal Nature Communications.
We knew from previous research that albomycins had antimicrobial properties, but it wasn’t until this research team dug into the fungal compounds that they learned that one specific compound, albomycin δ2, was especially adept at killing bacteria. It even outperformed a number of established antibiotics, including penicillin, when tested against the notoriously difficult to treat methicillin-resistant Staphylococcus aureus (MRSA).
It’s amazing what is emerging in the domains of biology and how we are constituted as WEs - multiple selves-as-a-self. It seems that we have at least three brains - the one in our skull, the one that defines our stomach, and the one that is our intestines - all of them integrated.
Your gut is directly connected to your brain, by a newly discovered neuron circuit
The human gut is lined with more than 100 million nerve cells—it’s practically a brain unto itself. And indeed, the gut actually talks to the brain, releasing hormones into the bloodstream that, over the course of about 10 minutes, tell us how hungry it is, or that we shouldn’t have eaten an entire pizza. But a new study reveals the gut has a much more direct connection to the brain through a neural circuit that allows it to transmit signals in mere seconds. The findings could lead to new treatments for obesity, eating disorders, and even depression and autism—all of which have been linked to a malfunctioning gut.
The study reveals “a new set of pathways that use gut cells to rapidly communicate with … the brain stem,” says Daniel Drucker, a clinician-scientist who studies gut disorders at the Lunenfeld-Tanenbaum Research Institute in Toronto, Canada, who was not involved with the work. Although many questions remain before the clinical implications become clear, he says, “This is a cool new piece of the puzzle.”
This may be useful to anyone who wants to know more about searching on Google, on its 20th birthday.
20 things you didn't know you could do with Search
Search has been helping you settle bets since 1998 (turns out there is some caffeine in decaf coffee), but now that we’re 20 years in, we’ve built dozens of other useful features and tools to help you get through your day. Let’s jump into some of these secret (and not-so-secret) tricks to up your Search game.
Search as your everyday sidekick
Here are some of the ways you can plan your day and stay in the know with Search.
A signal from the author of ‘Smart Mobs’ - a key Internet social media pioneer.
On quitting Facebook
I was one of the very first of the 2 billion Facebook users. I was teaching about social media at Stanford when Facebook was open only to Harvard, Princeton, and Stanford, so it only made sense for me to join right away. Like never needing paper maps and so many other changes that information technologies have brought to our lives, the changes on campus that Facebook brought seemed radical at the time, but are now just part of the landscape. Four years later — again, this was news back then, but just part of the landscape now — I was there when students were shocked that their drunken Facebook postings and photos affected their graduate school and employment prospects.
MySpace was a mess, but it was a creative mess, with users learning and borrowing from each other in a frenzy of customization.
My heart sank when Mark Z started talking about making Facebook “the social operating system of the web.” It was immediately obvious that a single, proprietary system would be a huge social and technical regression from the myriad ways people had been communicating on the social web.
As I have noted here before, I wrote in the mid 1990s about coming massive dataveillance — and very very few people seemed to care. I knew that Amazon was using data collected about my purchases and interests to suggest books to me, and although I recognized it as creeping dataveillance, I was one of many who not only didn’t care — I welcomed the convenience. When Facebook ads started trying to sell me things that I had looked at via other websites, that was creepy. But I did not comprehend what has come to be known as “surveillance capitalism” and the degree to which extremely detailed dossiers on the behavior of billions of users could be used to microtarget advertising (including, it turns out, to use gender and racial criteria to filter job and housing advertisements), and I don’t think anybody understood how that microtargeting could be joined with armies of bots and human trolls to influence public opinion, sow conflict, and (probably) swing elections.
It’s become clear that Facebook didn’t understand their inability to control such detailed surveillance of so many people, and to keep such a huge, widespread, and complex mechanism secure. It will never be secure. There will be breach after breach. They will never be able to fully automate and/or hire enough humans to protect the public sphere from computational propaganda that uses Facebook as a channel. (And in regard to the public sphere — does anybody really think anyone’s political opinions are changed by the sometimes vile and always time-wasting political arguments on Facebook?) And most recently, I learned that Facebook had sold my phone number to spammers after I had provided it (privately, I thought) to use to retrieve my account in case it was hacked.
I appreciate what Facebook enables for me. I have been able to stay in contact with people’s lives — many of whom would fall off my radar otherwise. I can share what I know, discover, and make. I can have fun. FB is a great source of gifs, videos, links, rants. I’ll be sorry to lose all that.
Well, the irony.
Linux now dominates Azure
The most popular operating system on Microsoft's Azure cloud today is -- drumroll please -- Linux.
Three years ago, Mark Russinovich, CTO of Azure, Microsoft's cloud program, said, "One in four [Azure] instances are Linux." Then, in 2017, 40 percent of Azure virtual machines (VMs) were Linux. Today, Scott Guthrie, Microsoft's executive vice president of the cloud and enterprise group, said in an interview, "it's about half now, but it varies on the day because a lot of these workloads are elastic, but sometimes slightly over half of Azure VMs are Linux." Microsoft later clarified, "about half Azure VMs are Linux."
That's right. On Microsoft's prize cloud, Linux, not Windows Server, is now, at least some of the time, the most popular operating system. Windows Server isn't going to be making a comeback.
You see as Guthrie added, "Every month, Linux goes up."