In the 21st Century curiosity will SKILL the cat.
thinking is interactions of brain and body with the world. Those interactions are not evidence of, or reflections of, underlying thought processes. They are instead the thinking processes themselves.
Edwin Hutchins - The role of cultural practices in the emergence of modern human intelligence
One avenue through which meta-awareness might impact well-being lies in its relation to mind wandering. Mind wandering has been found to consume as much as 50% of our waking life and is tied to our sense of well-being. If training in attentional forms of meditation does strengthen meta-awareness, we might expect this to impact both the incidence and impact of mind wandering. Recent studies have found that meditation training alters patterns of task-unrelated thought, showing that even brief trainings in mindfulness meditation decrease the behavioral indicators of mind wandering. Although meta-awareness and self-referential processes are difficult to operationalize, a few recent studies seem to indicate that brain regions associated with self-referential processing, such as the medial prefrontal cortex and the posterior cingulate cortex, may be downregulated by mindfulness related practices.
Reconstructing and deconstructing the self: cognitive mechanisms in meditation practice
Virtual assistants have also received a boost from major advances in subsets of artificial intelligence known as machine learning and natural language processing, or the ability for computers to understand speech. Accuracy of word recognition reached something of a tipping point in recent years, going from 80 percent in 2009 to 95 percent in 2014, said Christopher Manning, a Stanford computer science professor and natural language expert.
The rise of this technology is evident in a wave of new jobs at the intersection of human and artificial intelligence. By 2025, 12.7 million new U.S. jobs will involve building robots or automation software; by 2019, more than one-third of the workforce will work side by side with such technologies, according to Forrester Data.
The next hot job in Silicon Valley is for poets
So what followed sovereignty was technocratic order. Once some borders got hashed out, and some alliances broke down, you can have universe bureaucrats come in and actually run some things.
So I think sovereignty allowed unique types of player alliances to form, which turned it less from a game about one-off battles into these systems where you’re watching huge, top-to-bottom logistics operations be in place, behind the battles, to reinforce the lines. And then to run those, you need very impressive leaders, who are going to give speeches to rally the workers to continue building ships or go out on the front lines of a fight that is relatively futile. And that was fascinating.
...It’s one of those things that you find when you start looking at EVE Online for long enough—the people who play, and the people who succeed at the game, are genuinely brilliant. When you talk to the leaders who run these organizations, you’ll ask them casually while a call is wrapping up, so what do you do for a living? And this one woman said, oh, I run a nuclear reactor up in Portland. Another guy was like, oh, I run an international shipping and logistics company.
And that has been borne out by some of the leaders in the game’s history—they’re so busy running their alliances, they don’t have time to log in and actually play the game. So they don’t. They’re playing the game through Google Docs, spreadsheets, IRC channels. They’re holding meetings with all the other leaders; they’re having diplomatic conferences in chat rooms.
Leaders start to figure out that they don’t have to fight their enemies. And if they don’t, they can start to chip away at their enemies’ morale because those players have showed up to fight. They’ve showed up to participate in a battle. So if you don’t give them that battle, if you don’t show up unless you absolutely have to—in the community they would call it “blueballsing” the enemy fleet. These players have showed up, they’ve given up six hours of their Saturday, and they didn’t even get to fight. And that has profound implications for the morale of a fleet, how well they’ll listen to their commanders, and whether you can get those players to show up next time.
How to Write a History of Video Game Warfare - Eve Online
a project undertaken or a product built not solely to fulfill some constructive goal, but with some wild pleasure taken in mere involvement, was called a “hack.”
This latter term may have been suggested by ancient MIT lingo— the word “hack” had long been used to describe the elaborate college pranks that MIT students would regularly devise, such as covering the dome that overlooked the campus with reflecting foil. But as the TMRC people used the word, there was serious respect implied.
While someone might call a clever connection between relays a “mere hack,” it would be understood that, to qualify as a hack, the feat must be imbued with innovation, style, and technical virtuosity.
Shaving off an instruction or two was almost an obsession with them. McCarthy compared these students to ski bums. They got the same kind of primal thrill from “maximizing code” as fanatic skiers got from swooshing frantically down a hill.
So the practice of taking a computer program and trying to cut off instructions without affecting the outcome came to be called “program bumming,” and you would often hear people mumbling things like “Maybe I can bum a few instructions out and get the octal correction card loader down to three cards instead of four.”
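(A playful modern analogue of "bumming", for the curious - a sketch only, not what the TMRC hackers actually wrote: they counted PDP-1 machine instructions, while Python's standard dis module counts bytecode.)

```python
import dis

def double_all_verbose(values):
    # Straightforward version: builds the result one append at a time.
    result = []
    for v in values:
        result.append(v * 2)
    return result

def double_all_bummed(values):
    # "Bummed" version: same output, fewer bytecode instructions.
    return [v * 2 for v in values]

def instruction_count(fn):
    # Count the bytecode instructions in fn's compiled body.
    return sum(1 for _ in dis.get_instructions(fn))

# Same outcome, fewer instructions - the essence of program bumming.
assert double_all_verbose([1, 2, 3]) == double_all_bummed([1, 2, 3])
print(instruction_count(double_all_verbose), "vs", instruction_count(double_all_bummed))
```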
The Tech Model Railroad Club
The first quote in this Friday Thinking refers to an emerging theory of embodied cognition in which thinking emerges from culture-environment entanglement. The digital environment will not only enable us to have ‘new senses’ but extend those ‘real senses’ throughout the digital environment. During a Twitter conversation with Karl Schroeder, he coined the term ‘distributed realism’. Rather than confusing ourselves with fragmented slices of the future - terms like ‘virtual’, ‘augmented’, and ‘mixed’ reality, or notions of AI-ssistants as intelligence amplification (IA) - we could refer to ‘Distributed Reality’ (DR). This brings very significant new ‘percepts’ into possibility - with even more affordances - and, perhaps most significantly, challenges many current conceptions of what a ‘mind’ is.
I know one industry - one almost every adult knows about - that is very interested in the development discussed in this article. This should be a must-read for anyone interested in the future of ‘digital connectivity’.
“A fundamental question in this project is what role haptic stimuli play in perception,” says Ernst. With the term ‘haptic stimuli,’ the cognitive scientist is referring to the sensations that arise from touch. “A special feature of our finger pads is that they are fleshy – they can ‘deform’ by giving way when touching something,” says Marc Ernst. For instance, when a person touches a sponge, she feels its composition and consistency through the tactile sensors in her skin.
Finger Fooled: New Perceptual Illusion Discovered
Fingers are a human’s most important tactile sensors, but they do not always sense accurately and can even be deceived. Researchers at the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University demonstrated this in a new study in which they ‘outwit’ human perception. Test subjects placed their index finger in an apparatus and touched an object whose softness changed at random without the person noticing. While touching the object, the test subjects were under the illusion that it was the position of their finger that changed, not the softness of the object. The curious thing here was that the test subjects felt an “illusory” finger displacement, much larger in extent than the actual, “real” displacement. The researchers published their findings this Thursday, 7 April in the scientific journal “Current Biology.”
This is another must read/view Nature article - if only for the infographics.
A world where everyone has a robot: why 2040 could blow your mind
Technological change is accelerating today at an unprecedented speed and could create a world we can barely begin to imagine.
In March 2001, futurist Ray Kurzweil published an essay arguing that humans found it hard to comprehend their own future. It was clear from history, he argued, that technological change is exponential — even though most of us are unable to see it — and that in a few decades, the world would be unrecognizably different. “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate),” he wrote, in ‘The Law of Accelerating Returns’.
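The "20,000 years" figure is simple arithmetic. As a rough sketch, assume (as Kurzweil does) that the rate of progress doubles every decade, so the rate at year $t$ is $2^{t/10}$ times today's. Measured in "years of progress at today's rate", a century of such acceleration accumulates

$$\int_0^{100} 2^{t/10}\,dt = \frac{10}{\ln 2}\left(2^{10}-1\right) \approx 14{,}800$$

years of today's progress - the same order of magnitude as Kurzweil's figure (his published number rests on slightly different assumptions about when the doubling starts).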
Fifteen years on, Kurzweil is a director of engineering at Google and his essay has acquired a cult following among futurists. Some of its predictions are outlandish or over-hyped — but technology experts say that its basic tenets often hold. The evidence, they say, lies in the exponential advances in a suite of enabling technologies ranging from computing power to data storage, to the scale and performance of the Internet (see ‘Onwards and upwards’). These advances are creating tipping points — moments at which technologies such as robotics, artificial intelligence (AI), biology, nanotechnology and 3D printing cross a threshold and trigger sudden and significant change. “We live in a mind-blowingly different world than our grandparents,” says Fei-Fei Li, head of the Stanford Artificial Intelligence Laboratory in California, and this will be all the more true for our children and grandchildren (see 'Future focus').
Here’s a 2 ½ min video that graphically presents the current state of the world. Worth the view. Data sources are listed under the video.
If The World Were 100 People | GOOD Data
Published on 14 Mar 2016
If the population of the world was only 100 people, what would society look like?
Produced and Written by Gabriel Reilich
This is an open call from the folks who support the Creative Commons license.
Help Us Build Creative Commons Certificates – Open Community Call
With Creative Commons now being used by people all over the world to openly license over a billion pieces of content, a good working knowledge of what Creative Commons is and how it works is critical.
Creative Commons is developing a series of certificates to provide organizations and individuals with a range of options for increasing knowledge and use of Creative Commons.
The Creative Commons Master Certificate will define the full body of knowledge and skills needed to master CC. This master certificate will be of interest to those who need a broad and deep understanding of all things Creative Commons.
In addition, custom certificates are being designed for specific types of individuals and organizations. Initially Creative Commons is focusing on creating a specific CC Certificate for 1. educators, 2. government, and 3. librarians. The CC Certificate for each of these will include a subset of learning outcomes from the overall CC Master Certificate along with new learning outcomes specific to each role.
This is a must-see 20 min TED Talk - what is truly amazing is the office from which Linus guides the world’s greatest open source software. How he doesn’t really ‘love’ other people - but has come to love people. :) Both Linux and Git were projects that Linus needed so he wouldn’t have to work with lots of people. He says, "I’m not a visionary, I’m an engineer - I’m not looking at the stars, I’m concerned with fixing the pothole ahead of me before I fall in."
Linus Torvalds: The mind behind Linux
Linus Torvalds transformed technology twice — first with the Linux kernel, which helps power the Internet, and again with Git, the source code management system used by developers worldwide. In a rare interview with TED Curator Chris Anderson, Torvalds discusses with remarkable openness the personality traits that prompted his unique philosophy of work, engineering and life. "I am not a visionary, I'm an engineer," Torvalds says. "I'm perfectly happy with all the people who are walking around and just staring at the clouds ... but I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in."
Where do ‘hackers’ come from? There is a widespread public perception of ‘hacking’ as nefarious willful interference - rather than ‘willful tinkering to understand and make things better’. This is a nice article (19 min read) giving some history of the original ‘hackers’.
LOGIC ELEMENTS: the term seems to encapsulate what drew Peter Samson, son of a mill machinery repairman, to electronics. The subject made sense. When you grow up with an insatiable curiosity as to how things work, the delight you find upon discovering something as elegant as circuit logic, where all connections have to complete their loops, is profoundly thrilling. Peter Samson, who early on appreciated the mathematical simplicity of these things, could recall seeing a television show on Boston’s public TV channel, WGBH, which gave a rudimentary introduction to programming a computer in its own language. It fired his imagination: to Peter Samson, a computer was surely like Aladdin’s lamp—rub it, and it would do your bidding.
The Tech Model Railroad Club
The first computer wizards who called themselves hackers started underneath a toy train layout at MIT’s Building 20
Just why Peter Samson was wandering around in Building 26 in the middle of the night is a matter that he would find difficult to explain. Some things are not spoken. If you were like the people whom Peter Samson was coming to know and befriend in this, his freshman year at the Massachusetts Institute of Technology in the winter of 1958–59, no explanation would be required.
Why is the Creative Commons so important? No serious innovation can happen when all past creative works are behind a paywall. If we want to unleash human creative capacity, people need to be able to access creative content to recombine and rework into new creations. There are real alternatives to current business models that require everything to be made artificially scarce or rivalrous. Imagine Uber, but with every driver also an owner - a distributed-ownership Uber would represent a true sharing economy.
Record numbers of self-employed enter new tax year… and the co-operative model is here to help
“Working alone can be aspirational, but it can also be lonely and anxious. There is an extraordinary opportunity for new co-operative solutions for self-employed people, giving them the freedom of freelancing with the muscle of mutuality.”
Cooperatives UK have just released an in-depth report full of examples of best practices for co-operatives collaborating to meet the needs of a growing class of dispossessed workers – over 70% of whom in the UK are in poverty. We will cover various aspects of the report in the following days, and you can also read the full report here.
This is definitely something to consider, including the potential that renewable solar and wind energy has to change energy geopolitics and drive an economic-learning infrastructure for a digital revolution. But those of us in any part of the developed world also have to re-imagine our digital infrastructure - beyond privatized access to the Internet.
How can Africa master the digital revolution?
Digital connectivity has the potential to do for Africa what railroads did for Western economies in the 19th century. The digital revolution is not just about communication. It is about recognizing that information is the currency of all economic activities.
Unfortunately, despite the rapid adoption of mobile phones, Africa lags behind other regions in its use of core digital platforms such as the internet. This is compounded by the high prices charged for critical digital services such as broadband.
This is a fantastic innovation - perfect for Canadian winters (heat) and for creating a distributed electricity commons - plus these could serve as mobile and Internet hotspots, part of the future of the ‘smart city’.
London transparent about its new solar bus shelters
Dark, dreary bus shelters could soon be a thing of the past after London's first transparent solar bus shelter was switched on at the Canary Wharf business district on Friday. The shelter is reportedly capable of producing enough electricity for a standard London home for a year and will be used to power signage and other transport infrastructure around the Canary Wharf estate.
London is no stranger to solar-powered bus shelters, having installed solar-powered E-ink displays last year. Transparent solar panels are also something we've seen before, but early prototypes only offered around 1 percent conversion efficiency. UK solar technology company Polysolar has developed a new solar-photovoltaic technology with a whopping 6-12 percent conversion efficiency, dependent on the film's level of transparency. This new design allows for a transparent shelter that operates in low and ambient light.
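As a sanity check on the "one home for a year" claim, here is a minimal back-of-envelope sketch - every number below is an assumption for illustration, not a figure from the article:

```python
# Rough sanity check on the shelter's claimed annual output.
# All inputs are assumptions for illustration, not data from the article.
glazed_area_m2 = 20            # assumed usable panel area for a large shelter
insolation_kwh_per_m2 = 900    # assumed annual solar irradiation in London
efficiency = 0.09              # mid-range of Polysolar's quoted 6-12 percent

annual_output_kwh = glazed_area_m2 * insolation_kwh_per_m2 * efficiency
print(f"Estimated output: {annual_output_kwh:.0f} kWh/year")
# ~1,600 kWh/year - the right order of magnitude for a modest
# UK home's annual electricity consumption.
```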
The price of identity theft - that is, the price of selling an identity once it’s stolen - is crashing; stolen identities have become a commodity.
The price of stealing an identity is crashing, with no bottom in sight
Crooks are also acting more like normal businesses. Eager to please customers in a competitive marketplace, Russian hackers have started promoting round-the-clock customer service and placing ads promoting “free-trial attacks” and their “huge abilities.” This year’s hit product was an upgraded ATM “skimmer” that captures the magnetic stripe data on the back of a debit or credit card, priced at $1,775. Dell even observed new versions being manufactured with 3D printers, along with Bluetooth and micro-camera accessories. For the aspiring thief on a budget, hacking tutorials were at their lowest price in three years—just $19.99.
This may be a bit of hyperbole - but it really is coming to senior management and leadership cadres near you … soon. Does that mean there will be less of a rationale to pay them the ‘big bucks’? Or that there will be less opportunity for malfeasance and other vices - because ethical constraints can be coded and the advice provided can be recorded?
• Exposing the long-term implications of short-term decisions. Computers can enhance systemic thinking, uncovering the far-reaching (and often unintended) consequences of executive decisions.
• Experimenting to uncover new sources of value. Computers could, for example, be used to simulate the impact of large events, such as the potential acquisition of a rival (see the sketch after this list).
• Augmenting human judgment. Computers can free executives to do what they do best – exercise their business and ethical judgment. They can also help people avoid common decision-making traps, like the tendency toward groupthink.
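What "simulate" might mean in practice: a toy Monte Carlo sketch of an acquisition's value impact. Everything here - the distributions, the dollar figures, the variable names - is invented for illustration:

```python
import random

# Toy Monte Carlo: distribution of net value from a hypothetical acquisition.
# All figures and distributions are invented for illustration.
def simulate_acquisition(trials=100_000):
    outcomes = []
    for _ in range(trials):
        synergies = random.gauss(300, 120)        # assumed synergy value, $M
        integration_cost = random.gauss(180, 60)  # assumed integration cost, $M
        churn_loss = random.uniform(0, 100)       # assumed customer-churn loss, $M
        outcomes.append(synergies - integration_cost - churn_loss)
    outcomes.sort()
    mean = sum(outcomes) / trials
    downside = outcomes[int(trials * 0.05)]       # 5th-percentile outcome
    return mean, downside

mean, downside = simulate_acquisition()
print(f"Expected net value: ${mean:.0f}M; 5% worst case: ${downside:.0f}M")
```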
How intelligent machines are helping the C-suite
Computers are not just passive decision-support tools. They are becoming active advisers and partners in the C-Suite
How should Volkswagen rebuild its brand image after the widespread damage inflicted by the recent emissions scandal? Should Apple have fought its encryption battle against the US government? Would Yahoo be better off divesting its core operations? These are the types of momentous decisions that keep senior executives up late at night, but help – in the form of advanced computers – is on its way.
Today, we are moving well beyond the era of computers being merely passive assistants. We are now entering a period when the collaboration between humans and intelligent machines will be a source of competitive advantage for businesses. Computers are now becoming active advisers and partners, and that includes their presence at the highest levels of an organisation. Which means the prospect of intelligent machines in the C-Suite is not as far-fetched as it might first seem. In a survey of 800 executives conducted by The World Economic Forum, almost half of the respondents said they expect the first AI machine to be on a company’s board by the year 2025.
- From Incrementalism to Active Experimentation
- The Shaping of Strategy
- No More Sacred Cows
- Reaping Maximum Benefits
Here is another industry facing looming disruption.
Death of a real estate broker: 10 ways the industry is changing
What do real estate brokers, brokerage clerks and telemarketers have in common?
They will all lose their jobs in the near future. According to researchers at Oxford University, the potential for artificial intelligence computer algorithms to replace these jobs is estimated at between 97 and 99%.
The technological disruption that is happening in the real estate sector will go far beyond taking away “what you can sell”; it is set to radically transform the real estate profession. Recent research indicates that it is not just brokers but the entire real estate industry that has to rethink how new technologies as well as shifts in demographics and behaviour will impact upon real estate jobs, skills and business models.
The real estate industry as we know it will disappear.
Intimately related to real estate is the financial and banking sector - what’s coming for this sector certainly casts a ‘looming’ shadow, but it’s hard to guess exactly how things will play out. What will significant ‘job’ cuts entail? More inequality, as a few people at the top of finance harvest all the spoils? Or more equity? What makes it impossible to predict is that this is a highly regulated domain, and the question for the future is not less regulation but who shapes regulation to favor whose interests. Let’s hope the Panama Papers contribute to initiating a wave of progressive reforms.
What does the rise of fintech mean for banking?
European and US banks may be on the brink of an "Uber moment", as the explosion of fintech disrupts the industry and leads to massive job cuts over the next decade, a new report predicts.
Up to 30% of current employees in banks across Europe and the US may lose their jobs to technology by 2025, according to the report by Citigroup, which forecasts that around 1.8 million positions will go – mainly as a result of the automation of retail banking.
This really is a MUST SEE - a few graphs that illustrate the accelerating speed of technology-based change. This is not just about the camera - it is mobile technology writ large.
This is What the History of Camera Sales Looks Like with Smartphones Included
A few months ago, we shared a chart showing how sales in the camera market have changed between 1947 and 2014. The data shows that after a large spike in the late 2000s, sales of dedicated cameras have been shrinking by double-digit figures in each of the following years. Mix in data for smartphone sales, and the chart can shed some more light on the state of the industry.
Photographer Sven Skafisk decided to see what the same chart would look like with smartphone sales factored in. Here’s the chart he came up with using data from Gartner Inc.
Intel has declared Moore’s Law dead - maybe, and maybe not.
A $2 Billion Chip to Accelerate Artificial Intelligence
A new chip design from Nvidia will allow machine-learning researchers to marshal larger collections of simulated neurons.
The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.
On Tuesday Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning.
At a company event in San Jose, Nvidia CEO Jen-Hsun Huang said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang. It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia's previous best chip.
The thing about more is that more can become different - and this takes us to Moore’s Law becoming different.
Bringing Big Neural Networks to Self-Driving Cars, Smartphones, and Drones
Engineers are trying to squeeze outsize AI into mobile systems
Artificial intelligence systems based on neural networks have had quite a string of recent successes: One beat human masters at the game of Go, another made up beer reviews, and another made psychedelic art. But taking these supremely complex and power-hungry systems out into the real world and installing them in portable devices is no easy feat. This February, however, at the IEEE International Solid-State Circuits Conference in San Francisco, teams from MIT, Nvidia, and the Korea Advanced Institute of Science and Technology (KAIST) brought that goal closer. They showed off prototypes of low-power chips that are designed to run artificial neural networks that could, among other things, give smartphones a bit of a clue about what they are seeing and allow self-driving cars to predict pedestrians’ movements.
Until now, neural networks—learning systems that operate analogously to networks of connected brain cells—have been much too energy intensive to run on the mobile devices that would most benefit from artificial intelligence, like smartphones, small robots, and drones. The mobile AI chips could also improve the intelligence of self-driving cars without draining their batteries or compromising their fuel economy.
Did I just say Moore’s Law becomes Different?
Scientists just made the world's smallest diode out of DNA
Electronics on the molecular scale.
Researchers have shrunk down one of the fundamental components of modern electronics, creating the world's smallest diode out of a single molecule of DNA. In fact, it's so tiny, you can't even see it using a conventional microscope.
Diodes are electronic devices that make it easy for current to flow in one direction, but not another. In other words, they're responsible for moving current around a lot of common electronics, and are printed by the millions onto modern-day silicon chips. But to increase the processing power of these chips, we need to make diodes a lot smaller, which is where DNA comes into it.
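For reference, the one-way behaviour described here is captured by the textbook ideal-diode (Shockley) relation - the DNA work shrinks the device, it does not change the physics:

$$I = I_S\left(e^{V/(nV_T)} - 1\right)$$

where $I_S$ is the tiny reverse saturation current, $V_T \approx 26\,\mathrm{mV}$ is the thermal voltage at room temperature, and $n$ is an ideality factor. Forward bias ($V > 0$) gives exponentially growing current; reverse bias passes only the leakage $-I_S$.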
"For 50 years, we have been able to place more and more computing power onto smaller and smaller chips, but we are now pushing the physical limits of silicon," said lead researcher Bingqian Xu, from the University of Georgia. "Our discovery can lead to progress in the design and construction of nanoscale electronic elements that are at least 1,000 times smaller than current components."
And talk about Google Glass - remember that? This is not in production - but… here comes the participatory panopticon…
Samsung Patents Contact Lenses With Built-In Camera
Samsung has been granted a patent in South Korea for contact lenses with a display that projects images directly into the wearer’s eyes.
According to SamMobile, the patent includes a “contact lens equipped with a tiny display, a camera, an antenna, and several sensors that detect movement and the most basic form of input using your eyes: blinking.”
A smartphone will be required for the device to work, according to documents.
The “smart” contact lenses could prove to be a substantial upgrade from so-called “smart glasses”, posing a threat to what will be their main competitor in the market, Google Glass.
This isn’t 3D printing as manufacturing, but it points to new ways to produce drugs locally and less expensively. It’s part of the change in the conditions of change.
Pharmacy on demand
New, portable system can be configured to produce different drugs.
MIT researchers have developed a compact, portable pharmaceutical manufacturing system that can be reconfigured to produce a variety of drugs on demand.
Just as an emergency generator supplies electricity to handle a power outage, this system could be rapidly deployed to produce drugs needed to handle an unexpected disease outbreak, or to prevent a drug shortage caused by a manufacturing plant shutdown, the researchers say.
“Think of this as the emergency backup for pharmaceutical manufacturing,” says Allan Myerson, an MIT professor of the practice in the Department of Chemical Engineering. “The purpose is not to replace traditional manufacturing; it’s to provide an alternative for these special situations.”
Such a system could also be used to produce small quantities of drugs needed for clinical trials or to treat rare diseases, says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT.
Traditional drug manufacturing, also known as “batch processing,” can take weeks or months. Active pharmaceutical ingredients are synthesized in chemical manufacturing plants and then shipped to other sites to be converted into a form that can be given to patients, such as tablets, drug solutions, or suspensions. This system offers little flexibility to respond to surges in demand and is susceptible to severe disruption if one of the plants has to shut down.
Many pharmaceutical companies are now looking into developing an alternative approach known as flow processing — a continuous process that is done all in one location. Five years ago, an MIT team that included Jamison, Jensen, and Myerson demonstrated a larger prototype (24 by 8 by 8 feet) for the continuous integrated manufacturing of drugs from chemical synthesis to tablets. That project has ended, but the continuous manufacturing initiative, funded by Novartis, is still underway as the researchers develop new methods for synthesis, purification, and formulation.
This is very interesting progress in 3D printing.
3D-printed hydraulic robot 'can practically walk right out of the printer'
The bot's creation shows how 3D printing can advance from making individual components to whole active systems.
Researchers from MIT have used a new 3D-printing method that works with both solids and liquids to create a six-legged, hydraulically-powered robot. The team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created the bot using a commercially-available 3D printer. Several sets of tiny bellows power the robot's legs, and are filled with liquid deposited during the 22-hour printing process.
CSAIL director Daniela Rus said in a press release that the research was a step towards "the rapid fabrication of functional machines." She added: "All you have to do is stick in a battery and motor, and you have a robot that can practically walk right out of the printer." The resulting bot weighs about 1.5 pounds and is just under six inches long. Future applications for such cheap robots could include exploring disaster sites where humans cannot easily go.
This is totally fascinating - this should be a MUST READ for all social scientists. The 21st century is the century of complexity, the domestication of DNA and matter, and mass migrations to virtual worlds - and who knows what else? This article reveals a new domain for research: the history of virtual worlds. Equally important to understanding these Massively Multiplayer Online Games is that they are new research platforms for social experimentation on all levels - something all social scientists should become familiar with, offering affordances for new research tools, methods and domains.
“It’s one of the first times we can point to something and see an ideologically fueled conflict taking place on the Internet between tens of thousands of networked individuals.”
“They don’t view it as a game. They view it as a very real part of their lives, and a very real part of their accomplishments as people.”
At the end of the day, no one is doomed to work for any alliance. They can, at any time, pick up their bag and leave. So inspiration and morale become these huge resources for these organizations, and that’s fun, because it means propaganda starts to matter. I have examples of these wonderful propaganda speeches, where the leaders of these alliances start talking like they’re Winston Churchill. And you get the very real sense they believe it.
Would it be ‘unimaginable’ to think of ‘real’ conflict occurring in virtual domains? The concept of distributed realism enables a massive participatory platform generating embodied sensation, meaning, and implications.
I got to know EVE because when you’re just in the games community, when you follow games as a niche medium, you tend to hear these tip-of-the-iceberg tales about what goes on in EVE. Whispers get around of this battle that had 4,000 players in it, or this incredible Ponzi scheme someone ran that duped 10,000 people throughout the game’s history.
How to Write a History of Video Game Warfare
A journalist has assembled the first chronology of the largest war yet fought on the Internet—the Great War of EVE Online.
Imagine the Star Wars universe is real, and you live in it.
You have lived in it, every day, for the past 13 years. You’re the captain of a nimble fighter, working for money or sheer thrill; or you operate a modest mining ship, plying your trade in the nearest asteroid belt. For all that time, you have been scraping by in the final frontier—evading pirates, settling scores, and enduring the Machiavellian rise and fall of space empires. Sometimes you even pledge allegiance to a warlord.
For the more than 500,000 people who play EVE Online, this isn’t a fantasy. It’s real life. EVE Online is a massive multiplayer online game—a single environment shared by thousands of players, like World of Warcraft or Second Life—that has been in continual operation since 2003. It contains all the set pieces of space opera—moons, distant outposts, mighty dreadnoughts—but it is no ordinary video game. In fact, it is like little else on the Internet in its ability to mirror the functioning complexity of the real world.