A Stadiaster

That title isn't even punny. Yeah, I know I'm bad at making jokes and people tend to take me seriously whenever I make one on the blog, which is why I stopped doing them a long time ago (or did I?). I didn't follow Stadia's launch per se, but news and people going on about the whole shebang crept through the grapevine anyway. I couldn't help but feel slightly sorry for people who got Stadia, but this should also teach people that corporate speech is never to be trusted. While Stadia hasn't been a complete disaster, it's damn close to it.

From what I've been told, the lag is present in almost every game to a stupidly extreme degree. Button presses are recognised whole seconds after the fact, and some games simply stutter and play slow like you're in Nino Island Ruins in Mega Man Legends 2, just without the watery ripple effect. The instructions for Stadia recommend cutting off everything else in your Internet usage while playing Stadia, including streaming music. It's also recommended to connect it directly to the router rather than through a computer's or any other device's WiFi. There are some additional tricks people have found, but it's all really to make sure Stadia has all the bandwidth. Not just some, but all it can have. When I called out Google's bullshit claim that it'd do 60FPS 4K in a perfect manner and said nobody really has the speeds or connections to get games running at that quality, I knew people would be bewildered when their games ran terribly. Never trust the corporate word; it's meant to promote and sell, not to be truthful.

That should be a few nails in the coffin for Stadia, but that's just the game side of things. People haven't got their codes, some have been missing their devices and Google's own support isn't even in the know about Stadia. Sure, Google's a big billion-dollar company and not every part of it can be made aware of what the other parts are doing, but support should really be informed that this kind of product is coming and these are your instructions. This shows that Stadia's launch was very much a rushed thing, that Google barely had any time to put together proper documentation internally and did not prepare for what was to come. I bet your ass they knew well enough how badly everything would go, but hype will carry anything through. Now that they're getting real-world test data from existing users, they can start tweaking stuff properly. While not standard, it isn't unusual for a company to use early adopters as testbeds and beta testers. The "real" launch of Stadia will probably be sometime next year after they've further tweaked and fixed stuff, and when that supposed Freemium model of some sort gets launched.

Of course, when you fail at what you intended to do, you can always throw in identity politics and claim some brownie points through that. In an interview with CNN Business, Google VP and head of Stadia Phil Harrison claimed that Stadia is targeting women with the Stadia controller. Here's the archive link for it. If I'm being honest, this is a load of bullshit. The Stadia controller looks like a generic Chinese knock-off controller you can sometimes see being sold on eBay and other places; it looks like a blander version of the Xbox 360 controller. Controllers in multiple colours have been a thing since at least the Commodore 64 days, when you could find joysticks and other devices in different colours. Most often something neutral or targeting the pre-existing user group was offered, because those sell. The design director Isabelle Olsson claims that the wasabi colour they went with has universal appeal. There are vast numbers of colours that have universal appeal. Anything pale that's close to white would of course have universal appeal, as it doesn't make a strong statement in one direction or another. It's like vanilla; it goes well with everything and nobody really fights against the taste. CNN Business claiming that it's slightly easier for small hands to grip than similar products put out by rivals is nothing short of bullshit. All modern controllers that use the handle-grip design have to be designed to fit standard hand dimensions. The overall shape has to be different due to the patents and copyrights, but in recent memory there is only one controller that was intentionally designed to fit larger dimensions than the global standard, Xbox's The Duke. Claiming that they're targeting women with these design choices is laughable. It's nice to say this, when in reality your product is aiming to become a success with general audiences and not just a part of them.

Of course, Harrison also mentioned how they don't have the baggage of pre-existing gamer culture, a thing that's absolutely false. Whatever they actually mean by gamer culture is up for debate (long-time readers know that "gamer culture" and its history stems back to at least the 1800s), but you can't escape the market pressure and demands if you intend to enter a market and succeed there. Stadia may not have history attached to it, but that's just normal. No new product has a history attached to it, but at the same time, all the pre-existing games that were attached to Stadia bring their history and culture to the platform. Of course, this means Stadia can be the best of the best for the time being, before its core consumer base sets in, but right now Stadia has more infamy to it than any other platform. Harrison and the rest of the staff that decided on the whole women-centric and sex-neutral marketing have undermined their supposed attempt by bringing in old games that are very well mired in this culture they don't want to carry. It is extremely haughty to claim you're targeting an audience that isn't being catered to, when the world is full of options and readily-catering products. That's PR for you, throwing out ideas of what you're doing for the sake of making that sliver more sales. I guess that's the angle Google has to take with Stadia on the outside to make it stand out from the competition, when their model of service isn't meeting the wants and demands of the audience, targeted or not.

Another Epic PR disaster

When the Epic Games Store came around the first time, I considered it an addition to the whole economy of digital game stores. There's always more room to challenge Valve, GOG and the rest as long as the service is right, the price is tight and the products stand out. That last bit Epic has been working on overtime, but not in the way most consumers would want. It's not that Epic has put studios to work on unique games; they've been throwing dough around like no other, picking up games from developers on Patreon, Kickstarted products and such. Kickstarted products are the sore point, as many backers were promised either a physical PC release or a Steam key, but with Epic bringing its money to the table, those promises turn empty and backers are given Epic codes instead. While Kickstarter is not a store and changes are always going to happen, you should keep tight on your promised products. When things are like this, you need some good PR management skills to handle the situation. OK, let's be realistic; you need someone with excellent PR skills and background to manage the consumers and dampen all the possible damage. You never go in head first yourself, because you don't have the skills or knowhow. You'd be an idiot to assume that consumers of any sort are a kind bunch. Outside already-promised products, e.g. via Kickstarter, changing their form and direction, in principle there's nothing wrong with Epic's way of making exclusives. Personal opinion doesn't exactly matter when the majority has made their negative view of the platform rather vocal.

Consider why each and every successful corporation, company or individual businessman has a front while everything happens behind the curtains. That front keeps the consumer at arm's length and some details behind the curtain, while still allowing proper discourse with the customer.

You probably already know the ins and outs of how Ben and his wife Rebecca have been working on a game titled Ooblets and how it became a timed exclusive for the Epic Store. I didn't know about them two days ago, and apparently not many others did either. Still, Ben doesn't mention his last name or sign with a full title, so I'm going to call him just Ben, uncharacteristically. Sorry Benjamin, I don't mean to mix you up with this Ben. After Ben announced the situation, he and his wife got some heavy backlash, which should have been completely expected considering how negative Epic's reception is. Of course, being Ben, he went on to Medium and wrote a long response. Archived version for your pleasure. We're mostly going to concentrate on this, but you can jump on their Discord if you want to read how easily Ben is willing to take a shot at people for whatever reason. OneAngryGamer has some of them archived, just as his article is.

It really is largely trite to read through, as anyone who has followed the standard course of events in indie game production from the start should know, especially when the title has been Kickstarted. Most interaction with fans is positive, until you fuck up somehow. When you fuck up, that brings in the rest of your silent backers and other potential customers like a lightning rod. Ben describes how their style has been jolly and non-serious all this time, which is the first error most of these independent creators make, because it means nobody can ever really trust their info without sifting through the bullshit they're spouting. Having a joke here or there to break the ice is great, but being tongue-in-cheek as your standard style of interaction is about as welcome as a rash on your ass. Sure, it's colourful and gives you attention, but in the end you want that clear and fresh feeling instead.

The Internet brought nothing new when it comes to mad people. It is a misconception that the Internet ushered in some sort of new era of hate messages or the like. No, hate mail has always existed. Before direct messaging and email, people used letters published in newspapers or sent directly to the provider, or simply called by phone. The Internet has just democratised who can voice their opinion and how. Ben listing some examples of people going overboard does show that there are people who are either genuinely mad, or who just want to pitch in for a good time's sake. Neither is really constructive, but emotions tend to take over people very easily.

Ben makes clear that he doesn't consider anyone a customer. Neither he nor his wife has sold anything to anyone, so there isn't a provider-consumer relationship. He'd be wrong. The relationship that exists between the two and their audience is that of a potential consumer base, which has effectively become the fanbase they were nurturing. In the eyes of the law he can argue that this is the case. However, consider that the team has a Patreon that is directly about funding the game. Still, they don't offer any of the game there, just some merch when they begin to produce it. Maybe.

However, when you have a fanbase, interact with them and constantly update them on your progress, you have a group of people you have cultivated as your main consumer base. There is a certain silent agreement between you and this group of people about a transaction, and this has been going on for three years. If Ben thought for a moment that there wasn't a meta-transaction on an emotional level going on, he has been sorely mistaken. He can call people entitled all he wants, but do remember that when you are promising a product to fans and have given your word (despite this not being a binding contract), you've already made emotional connections and managed to tie the future consumer of your future product to your brand. That tongue-in-cheek nature of messages and updates is an element that backfires twice as hard in situations like these, as that tone is often seen as facetious and deceptive. At best it'll be regarded as condescending, though often that's the underlying tone. There have been implied promises going on for three years. Morally speaking, Ben and his wife do owe these people. Furthermore, they owe their very current monetary situation and success to their fans and especially to their patrons.

Ben admits he has a PR disaster on his hands. Yet he blames this on a portion of the gaming community rather than acknowledging his own fuck-up. His business sense overrode the work he had done with his PR, where Epic's offer for a timed exclusive seemed a better option than long-term positive feedback. Even my sorry ass has heard enough tales of consumers and fans getting riled up over developers and publishers being swayed by Epic's bucks. Any and all devs at this very moment should ask themselves: is my reputation worth more than the money I'm currently offered? Hell, I'll even argue that if a dev now made a bold announcement that they had rejected Epic's offer for exclusivity in favour of fans' and consumers' preference, and did it in a proper way, they'd be hailed, in the words of an Australian, as fucking heroes.

If you screw up your PR like this and make a widely unpopular move, all the while taking a good shit on people who could have been customers, then still proceed to take numerous dumps on people and belittle them, don't go crying over a massive backlash. While regrettable, it is also the harsh truth of business and maintaining your image. Ben and Rebecca's first ride on the PR train going off the rails was, ultimately, their own doing. A reaction always requires something to set it going. Just to make sure: I didn't say they deserve the worst of the rap that's raining on them, but they are the source of this reaction, which could have been mostly avoided. Not with the way Ben and his folks were handling their interactions, though.

This whole deal shows a basic lack of consumer research and expectations evaluation. Both PC and console consumers have been vocal about Epic's failings and even more about how developers and publishers seem to have lost all contact with the people who buy their stuff. I shouldn't underline the bottom line with this repetition, but as a provider, albeit one who has not yet delivered a single product, everything hangs on the people who are willing to give you money. Now, with their decision to handle things like this, not practicing good sense and proper manners when interacting with the audience and clowning around instead, they'll probably see less success and a very tarnished reputation. That'll take some polishing to fix.

Providers aren’t your friend. They’re in the field to get paid. Directly interacting with them won’t change this, no matter what sort of relationship and emotional connection you have with them.

Heads in the clouds

Cloud gaming is making some waves again, with Sony and Microsoft announcing a collaboration to explore solutions for their respective streaming services. At least that's according to an official statement from Microsoft, despite the two being rivals within the gaming market. We should always remind ourselves that out of the Big Three, only Nintendo deals exclusively with games. Both Microsoft and Sony have their fingers spread elsewhere, with Sony having movie and music studios, Microsoft with Windows and so on. While Sony does rely heavily on the profits their gaming department is making (to the point that most of their profits come from there, seeing as everything else has been going downhill for them), Microsoft doesn't as much. I'm not even sure if Microsoft is still making any profit on their Xbox brand and products, considering neither the original box nor the 360 saw any real profit throughout their lifespans. It's like a prestige project for them; they gotta have their fingers in the biggest industry out there. The more competition, the better though. This does mean that neither Amazon nor Google can partner with Sony for a similar venture, but perhaps this was more or less a calculated move on both of their parts.

It does make sense that the two would collaborate to support each other in a cloud and streaming venture though. Sony already has an infrastructure for streaming gaming content with PlayStation Now, while Microsoft has the whole Azure cloud set up. MS Azure contains lots of features, from virtual machine computing and high-density website hosting, to general and scalable data management, all the way to media streaming and global content delivery. The safest bet would be that both MS and Sony intend to share their know-how of content streaming, but it is doubtful whether the two will actually share any content. Perhaps Sony's music and films will be seen on Microsoft's services, but don't count on the games. However, I can't help but wonder whether multiplatform games between the two could be specifically designed and developed for their combined streaming efforts. That's a bit out there, as the collaboration is about finding new solutions rather than building a common service the two would use. This is, as Satya Nadella said, about bringing MS Azure in to further power Sony's streaming services, and at its core that's a completely different part of the market from games.

This does seem like an enemy-of-my-enemy situation. Google's Stadia is touted to be the next big hitter on the game market. It's not unexpected for the two giants to pull something that would weaken Stadia's standing. This, despite Stadia already having boatloads of obstacles, ranging from control latency to the quality of the streaming itself (end-user Internet connection still matters, especially if you live in the middle of nowhere surrounded by dense forests) to the very content itself probably being less than unique. Let's not kid ourselves, cloud gaming is not for everyone despite what Google's PR department wants you to think. Not everyone has the money or infrastructure to have a proper connection for cloud gaming. Anecdotes be damned, but there are lots of people living around here who have to rely on wireless Internet for everything, especially up North, because the population is so spread apart that putting data cables into the ground would not be worth it. Early 2000s modem speeds are not unexpected; they're the standard. If early reports on Stadia are to be believed, there's some serious lag and latency on standard Internet connections. It's not going to play well with someone who doesn't put a whole lot of money into their Internet connection, or just can't. If we're going to be completely open about this, only a fraction of the world can handle cloud gaming. 10.7 teraflops of computing power and 4K resolution for Stadia? A pipe dream at best.
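To put some rough numbers on that 4K claim, here's a back-of-the-envelope sketch. The figures are my own illustrative assumptions (24 bits per pixel, a 35 Mbps target stream), not anything Google has published, but they show how much heavy lifting the video compression has to do before latency even enters the picture.

```python
# Rough bandwidth arithmetic for a 4K60 stream. Illustrative assumptions only.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

raw = raw_bitrate_mbps(3840, 2160, 60)   # roughly 11,900 Mbps uncompressed
target = 35                              # assumed real-world streaming bitrate, Mbps

print(f"Uncompressed 4K60: {raw:,.0f} Mbps")
print(f"Compression needed to fit {target} Mbps: ~{raw / target:,.0f}x")
```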

Streaming interactive content like video games and computer games is not easy. Music and video are comparatively easy: just send the data to the consumer and you're pretty much done. Gaming requires two-way communication at all times, and on top of that the service has to keep tabs on what's going on at both ends within the game. No matter how robust the data centres are, no matter what sort of AI solutions are implemented, it all comes down to the latency between the data centre and the end-user. Perhaps the best solution would be to split the difference in a similar manner to how mobile games keep partial data on the phone while syncing with the server side all the time. That, of course, would be pretty much against the whole core idea of cloud gaming, where the end-user would just hold an input device and a screen.
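To illustrate why that latency is the killer, here's a minimal latency-budget sketch. Every number in it is an assumption picked for the sake of the example, not a measurement from any actual service; the point is simply that the individual stages pile up against a 16.7 ms frame at 60 FPS.

```python
# Minimal input-to-photon latency budget for cloud gaming. All figures are
# assumed round numbers for illustration, not measurements from any service.

FRAME_BUDGET_MS = 1000 / 60   # one frame at 60 FPS, about 16.7 ms

budget_ms = {
    "controller input upload": 5,
    "server-side game simulation": 16,
    "video encode": 5,
    "network transit back": 20,
    "client decode and display": 10,
}

total = sum(budget_ms.values())
print(f"Assumed round trip: {total} ms, about {total / FRAME_BUDGET_MS:.1f} frames behind")
```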

Cloud gaming has been tried for about a decade now. It's still a ways off, but it's very understandable from the corporations' perspective why they'd like it to become mainstream and successful. For one, it would remove one of the biggest hurdles on the consumer side: getting the hardware. You could just use your existing computer or smartypants phone to run things and you're set. Maybe have a controller, but you can get those for twenty bucks. No need to pay several hundred for a separate device just to run separate media software. Cloud gaming would be the next step in digital-only distribution, which would also offer better protection from piracy. Control is the major aspect of cloud gaming, and the end-user would have effectively none. You would have no say in what games you have access to. One of the well-marketed modern myths about streaming services is that everything is available 24/7, when in reality everything is determined by licenses. Star Trek vanished from Netflix for a time because the license ended, for example. This happens all the time. I'm sure there's a lost-media list somewhere of digital-only films and shows that were lost because publishing rights and licenses expired. Lots of games have vanished from both Steam and GOG because of this, and if there are no physical copies floating around, pirating is your only option. For something like the Deadpool game, you can only get second-hand copies or new old stock, as the developer's and publisher's license expired a few years back.

Will cloud gaming be the future? Probably at some point, but the infrastructure is still way off for it to become any sort of standard. It is, in the end, another take on the decentralised gaming Nintendo has going on with the Switch, moving away from the home media centre in the way that smartphones already did. Cloud gaming will take a firmer hold once it beats systems with local storage in value and performance. For now, enjoy the screen in your pocket.

The price of production

End users very rarely think about the production of consumable and usable goods. Why should they? It doesn't exactly touch their daily lives to any meaningful extent beyond the price of the product and the environmental impact it causes, and outside of that nobody really thinks about things like how their forks have been produced. Even before you get to smelting and consuming the raw materials, whatever company procured the materials had to have its own equipment to obtain the metal, most likely via some sort of mining operation, which leads to the whole cycle of obtaining the materials, all the plastics and metals, to produce the necessary equipment. It is practically impossible for a general consumer to ever know where and how their products have been sourced. Many companies make big promises about ethical treatment of workers or the environment, often both. Fairtrade is one of the prominent examples of this, with issues ranging from low pay for coffee to less money ending up with the growers themselves. Growers make three to four times more money by selling outside Fairtrade, whereas less than 12% of the money made from Fairtrade products ends up going back to the source, despite the significantly more expensive price tag products under this brand are sold at. Fairtrade themselves claim the price is justified by the high quality of their products, though that seems to be less the case the more you look into the habits of hardcore foodies. Premium coffee markets were expanding in the 2010s, and Fairtrade's offerings didn't seem to match that quality. Olivier Roellinger of Les Maisons de Bricourt said it best when he described the whole Fairtrade scheme as neo-imperialism imposed on growers. However, Fairtrade continues to succeed to an extent with their branding of ethics and practices.

Let's use another example, where the production of something is completely ignored due to the perceived and argued value of the usable good: electric cars in Germany. The Brussels Times recently wrote that a German scientist had found that electric vehicles in Germany cause more CO2 emissions than diesel cars. You might be wondering how this could be possible, as electric cars don't really have CO2 emissions. The study found that electric cars, despite their perceived position as an environmental saviour, ultimately cause further emissions due to the source of their power. The power needed to charge these cars comes from power plants, and in Germany they are phasing out the greenest and cleanest form of energy production: nuclear power. Each nation phasing out nuclear power in favour of alternative methods ends up with either coal or a far weaker form of energy production, even though nuclear ultimately releases less radiation into the environment than the alternatives. Richard Rhodes has an excellent opinion piece on the subject that I would recommend reading.

While the history of nuclear power has its spots, so does every other form of energy. However, in most of these cases human neglect and lacking procedures have caused the most damage. In Chernobyl, the combination of old, inefficient Soviet nuclear tech and carelessness caused the meltdown. Fukushima Daiichi, too, was supposed to be refurbished and upgraded many times before that fatal earthquake, but lobbyists and anti-nuclear-power movements prevented this, ultimately leaving Fukushima's reactors and facilities out of date. If they had been upgraded when needed, as they were originally supposed to be years prior, the Fukushima incident could have been avoided. The fact that nuclear is the cheapest, cleanest and most efficient power source we have means that every charge we do outside nuclear power damages the environment.

What do we charge? Mobile phones, portable torches, mp3 players, other mobile devices, e-readers, electric cars and so on. Everything runs on batteries, and mining the metals and minerals for them (lithium, cobalt, manganese, iron, copper and hematite, just to name a few) takes energy in itself, often oil- and coal-powered. These materials are mined in massive amounts, and the insanely large quantities produced make the end price as low as five bucks for a pack of ten AAA batteries. Then take the amount of chemicals these products require, from surface coatings to the adhesives and plastic parts used inside, and you have more materials that need to be produced and assembled through hundreds of different hands.

To use another example, solar panels are considered a very environmentally friendly source of energy. Yet this discussion almost always omits the copious amounts of quartz that has to be mined and then turned into silicon in furnaces that emit sulfur and carbon dioxides in large amounts, as well as wasting large amounts of heat. Let's not forget all the particle pollution this causes. Then you have all the chemicals that are required and produced while preparing and wafering the silicon for the panels themselves. The second issue is the panels themselves, or rather, the shadows they cast. If a solar panel is placed anywhere that isn't a building's roof or wall, like a large field or a lake, it will cast a shadow on the ground. When you have large areas cast in shadow, this impacts the growth of the plants and can screw up small animals in that region. Of course, when the panels are finally up and running, they do produce clean energy, even if a good chunk of the gain has already been cut in production by all the coal that was burned along the way.

To reiterate, the production of any good takes resources, even seemingly invisible goods like the electricity in your home. It comes from somewhere, and its making requires materials of its own and someone to make it, even if it's just one guy in a control room making sure shit doesn't explode. Whatever product you have in your hands now, be it a cup of coffee or a mouse, consider for a moment how many different individual stages of production it has gone through, and how many hands have been making it, before it ended up in your care. The number is, most likely, more than we can guess.

The smartphone market is changing

There has been a lot of news about Apple's stock falling this week. Some celebrate that this spells the end of Apple and the beginning of their downfall. Some are losing their hair over the stock value dropping like that. Some don't exactly care about the whole deal, but find it interesting nevertheless. The thing is, when Apple became the first trillion-dollar company, it got into a place it shouldn't have been in the first place. Apple as a company isn't exactly cutting edge.

The smartphone market is just like any other market out there. There will be market leaders with products that are used more for a period of time before something else comes along and does things a little differently to cater to the new needs and wants of the consumer. The classic example of Facebook replacing MySpace is something that happens constantly, though the timescale with some products can be glacial. Sometimes the company replacing the product is doing it by themselves, like how Hasbro saw the falling sales of Transformers toys and relaunched the series with Beast Wars, which in all honesty saved the franchise. The reason this worked was that from the mid-1980s to the late 1990s the world experienced a kind of boom relating to land and sea animals. Star Trek IV: The Voyage Home was part of this green movement, where whales and dolphins got their share of newfound love. SeaQuest DSV and the 1995 Flipper series hit this cultural consensus just the right way. The consumers wanted something different and there was a need for innovation. Hasbro relaunched their money-maker franchise and the rest is history.

The smartphone market is in a similar place at the moment. Apple beat Nokia by innovating on the concept of the mobile phone, using existing ideas to make something new. Nokia had a chance to beat Apple, but they never launched the phone that would effectively have been what the iPhone is, because the execs didn't see a need to radically change their strategy. Execs tend to be rather rigid in their way of thinking about products and the market, which is why the smartphone market sees very little to no innovation at the moment. Everything is incremental. The bezels are slightly thinner, screens are slightly larger (with a stupid notch there for whatever reason) and the usual tech advancements apply. Not many people are happy that the earphone jack, something that has been and still is a standard, has gone missing. In effect, there is really no reason to upgrade your phone if it's not busted.

Apple's innovation regarding the iPhone has always been to stand on the shoulders of others. That's completely normal for a tech company, though some would claim Apple didn't exactly innovate while standing on those shoulders. Nevertheless, whatever Apple's strategy is or was, it's not working as intended. Falling sales can be directly related to consumers being more or less full of Apple products and those products not meeting their needs. Chinese competitors, producing phones that are simply better and more stylish at a cheaper price, and which don't lock you into Apple's ecosystem, seem to have become more popular in Asia. Hell, in India Apple has only a 1% market share; they just don't jibe with the wants of the consumers there.

The thing with Apple is that what they market first and foremost is lifestyle. This is somewhat ironic, as Steve Jobs himself said something along the lines of When a company begins to market and stops innovating, it dies. That's pretty much all Apple has going for them at the moment. They are an alternative lifestyle company, offering inferior products to consumers who wish to stand out from the rest of the crowd. Apple's device ecosystem is unbeatable currently, with all of their products existing in unison with each other, but outside of that the hardware and software aren't exactly the best possible. It's sad to see creative schools pushing for Macs when the industries themselves use different systems. Adobe works just as well on Windows as it does on Apple.

At some point Apple reaches its own market saturation, where most people who can afford Apple have entered the ecosystem. After that, sales will be harder to make and they have to make serious efforts to convince their customers to upgrade each device in the ecosystem. Hence, it's not the best idea to put the blame for the lack of sales and success on the consumer. Apple does offer some neat services and their personal security is top notch, but as said, the devices themselves haven't exactly changed, and that applies to smartphones overall. At some point, the current paradigm is just driven into the ground and something else will come about. Apple has been toying with watches and glasses, but they're not what the consumers want.

That's the crux of things really. A company can't really blame its consumers for not wanting to upgrade if there is nothing new, no edge over the old stuff they already have. Innovation be damned, buzzword as it is, but none of these companies can ignore that the smartphone market is in a spot where things are becoming stagnant. What the next big thing will be is an open question, and very few people are even able to make an educated guess. Maybe we'll get those flexible and bendable phones Nokia was talking about in 2008. Whatever it is, one can only hope it'll be as massive a paradigm shift as the smartphone was from its predecessors. Cleaning the slate does good at times.

Thousands of failures

Great design is like great translation; you don't notice it unless you make the effort. The problem with this assumption is that there is no design that has universal acceptance. Let's use something general as an example, something most of you use in your daily life, like a cupboard handle in your kitchen. Now that I've mentioned it, you're probably conscious of its shape but may not really know how it feels in your hand. After all, it's just a handle you pull and push every day, probably multiple times. This handle may be very ornate or just a simply shaped metal arch, but it is something you should never really be conscious of. At least not after you've finished your kitchen renovation that took ages, made your wife mad and probably ended up costing you an arm and a leg after you managed to screw up the installation process early on. There are more fitting handle shapes than there are hands, because the hands we have can all accept more than just one shape. We just tend to notice when the handle doesn't really want to work with our own.

The number of handles does not mean that there is an equal number of successes. While there may be thousands of handles that fit just perfectly, the reality is that there have probably been five times as many or more discarded pieces that never moved beyond the prototype phase. And the sad reality is that some of these protos probably were better than the final product. For each successful product there are tens if not hundreds of unsuccessful attempts.

Even the most seasoned designer will make missteps and sometimes fail to realise what is self-evident to the consumer. This is why prototyping and giving enough time to finalise the product is incredibly important. Not just in design, but in every field. The sad thing is that no product is ever truly ready and has to be released into the wild in a good-enough state. Sadly, with games this good-enough bar has been lowered so many times that games are essentially released half-finished in order to hit the publishing date, and the missing content or known bugs are fixed through a Day-One patch. God I hate Day-One patches; they never bode well.

How does a designer know he screwed up? In the game industry it's pretty clear, as consumer feedback can be directed at the designer through forums and social media. Sales are second, but those only tell you that the product wasn't met with the best acceptance out there. It's not exactly easy to pinpoint why a kitchen handle didn't make a breakthrough in the market, but we have to allow some leeway here; kitchen handles don't tend to sell tons after the initial launch. They're not something people need to renew too often. If ever.

The easiest way of knowing what went wrong with a design would be to have the user tell you outright. For a handle: where it chafes, what wrist position it forces you into, whether the surface is too sleek and causes slipping, and so forth. Not exactly rocket science, but the general consumer doesn't really care to give such feedback. Then again, door handles really aren't a million-dollar business, so losses from more experimental and niche products aren't a big deal. The good old, time-tested basic shapes still rule the market.

Feedback is something all designers should want. I say should, as this splits opinions. To some, a finalised product is as intended and fills the role it has been given. There is no reason to go change the product afterwards, no matter what the feedback is. Sadly, this doesn't really bode well, and I've seen a few companies go bankrupt because the people in charge were unwilling to change aspects of their products. After all, design isn't art and doesn't require the same respect for the author's intent. This goes for visual design as well; e.g. web design is very dependent on how the consumer can navigate the site. I'm sure all of us could give loads of feedback to websites about their current designs.

However, as said, the consumer isn't really willing to give feedback, not when it's really needed. The skill to read this feedback is important as well, as feedback on a product is not a personal assault. One needs to be professional and distance oneself properly in order to read through some of the harsher bits. The difficult part begins when you start applying that feedback and notice that the very core idea your handle had is slowly being discarded in the re-evaluation and redesign process. This can lead to more prototyping and more discarded pieces, but this sort of thing happens only with something that's absolutely required for a task, like how the Xbox's controller got completely redesigned for the Japanese market after the hulking beast of a controller got some feedback.

Of course, when you have no feedback to go on outside sales, you're forced to analyse what went wrong. Unless you have some people around you to get things re-tested, or even the money to hire a test group. Sometimes self-evaluation is cheaper and more effective than general feedback when the faults are apparent (even if you never thought of them before despite the faults staring you in the face) and relatively easy to fix.

If a designer (or a company) manages to roll out a second, updated version of the product that makes the initial one obsolete, the initial release has essentially been trash. There's no way around it. Even with the best intentions, with loads of time put in and a lot of polishing on a product, a failure is a failure, and one just has to stand up and own their mistake to learn from it. Everybody is allowed to make mistakes; we just need to learn from them. A designer can't continue to create products that repeat the same mistakes, like a cupboard handle with corners sharp enough to cut your hand open when grasped.

Hard mode is now DLC

So I was intending to leave this Friday's post on a somewhat positive note about the Switch's possible future after reading Shigsy's interview with Time. The largest positive thing here is that Miyamoto slightly hints that the Switch in a few ways seems to be Iwata's final piece, with him giving feedback on portability and ideas on networking and communicating. How much of the current networking elements are from Iwata and how much was made disregarding his feedback is an open question. Iwata spearheaded the Wii and the DS, and if the Switch is anywhere near them in terms of idea and approach, then the Switch will definitely do better than the Wii U. Not that doing that should be all that challenging.

However, Miyamoto also speaks of virtual reality again. In essence, Nintendo is looking into VR at the moment, which ties into the obsession with 3D that Nintendo still has. If you look at how long Nintendo has been pushing the idea of 3D in games, you can trace it back at least to Rad Racer, if not further. You could almost make an argument that the more Nintendo tries to push 3D and VR as the main element of their machine, the worse it does.

VR has currently gone nowhere. After the initial boom of virtual reality, nothing has come out of it. No software has changed the industry or set new standards. We've been told that VR will be at its peak in a few years for a few years now, and this repeats every time a VR product comes out. It's not about a lack of marketing or failing to market the product right. It's about the common consumer not really giving a damn about VR in actuality, and most VR headsets we currently have are far too expensive for their own good. None of them work independently, which only adds to the costs. They're a high-end luxury product at best with no content to back them up.

That said, Miyamoto cites Iwata talking about blue ocean and red ocean marketing, two points that his own actions seem to dismiss most of the time, but he does commend Iwata for bringing this ideology to the front within the company. To quote what Shigsy said:

This is something that Mr. Iwata did, to really link the philosophy of Nintendo to some of the business and corporate jargon, while also being able to convey that to all of the employees at Nintendo.

Iwata had a presence both with the company and consumers. While Nintendo had few faces after Yamauchi, Iwata stood out. He was the company’s corporate face that managed to juggle between worlds. If you’re a fan of his, you’ll probably find elements in the Switch that underline Iwata’s approach as the head of the company.

Nintendo has many faces now that Iwata has passed. It's not just Miyamoto and Iwata any longer; numerous of their developers have come to the front even further. It's almost like each game or franchise is now attached to a face, like The Legend of Zelda being tied to Aonuma.

The recent BotW announcement video killed pretty much all my personal hopes for the game being something special, mainly because it confirms that even when Aonuma is wearing something that resembles a suit, he still comes off sloppy. Still, the video does right by having subtitles instead of him trying to speak English.

The fact that Hard Mode is now DLC signifies that Breath of the Wild won't be Zelda returning to its glory days as an action title that requires skill, but will continue being a dungeon puzzler. Whether or not these DLC packs are an afterthought, it strikes me as very worrying. The Legend of Zelda had a completely new quest after the first round. Aonuma saying that they'd like to give seasoned veterans something new and fun is outright bullshit. New items and skins don't add to the game except in minuscule ways. A Challenge Mode was in previous Zeldas from the get-go. Additional map features do jack shit unless the base map in the game is terrible. A new original story and a dungeon with further challenges are nothing new or exciting. These are basic run-of-the-mill post-game stuff Zelda used to have. Modern Zelda tends to have terrible replay value, but this DLC announcement hints that BotW has worse replay value than normal.

I guess this shows how Nintendo is going to deal with the Switch overall, at least after the launch. The Switch requires extra purchases to be complete, like having to buy the Charging Grip because the bundled one doesn't charge. The game industry has been blamed for cutting their games into pieces to sell as DLC, and it really does feel like that at times. DLC is often developed alongside the main game, and nowadays DLC is planned from the very beginning. Taking this into account, with the Aonuma announcement video Nintendo effectively showed that they took what used to be, to some extent, standard parts of a modern Zelda and made them DLC. The veterans they refer to are the core fans of modern Zelda.

Nintendo can't have two duds of consoles on their hands now. Twenty years ago they could afford to have the N64 under-perform when it came out much later than it was supposed to, and the GameCube couldn't stand against the rampaging truck that was the PlayStation 2. The economy is completely different now from what it was in the late 1990s and early 2000s. The Wii U was pretty much a disaster, perhaps even more so than the Virtual Boy, as it was Nintendo's main home console with the full backing of the company, in the vein of two of the most successful consoles in game history. Granted, not all machines can see the success of the Game Boy. They could, though, if developed properly and if the software library saw proper maintenance from first- and second-party developers.

I'm still going to stick with the Switch being more of a success than the Wii U. However, if Zelda BotW is any indication of the future, there is a fly in the ointment.

Ageless games across generations

Video games have more in common with hide-and-seek than with movies, literature or music. This is due to video games, and electronic gaming in general, being the latest iteration of play culture. As such, games of the past, be they from the NES or Atari era, still find a home with the new generation of consumers just as easily as any well-planned children's play, game or even sport would. Only in the video game industry do we hear of something becoming obsolete because of its archaic technology or because of that aforementioned new generation. Soccer, basketball and numerous other sports are still around because they are ageless; each of them has been passed down to a new generation, just as children's games are.

Children will invent stories as they play along, be it costume play, playing with figures or something else. While there is a rudimentary narrative running through these plays, playing is the main thing. Electronic games, both PC and console games especially, are largely a legacy of these plays. The problem with electronic games is that they are static and can't dynamically change as the player wants. This is why more varied games are always needed, and the more unique titles we have, the better. The Legend of Zelda and Skyrim may be based on a similar notion of a hero in a fantasy land, but their realisations are different and serve different purposes. On the surface the ideas and even the core structure seem similar. As the reader already knows, the two games are vastly different in how they are played. Just as the narrative in children's play is there to reinforce the act of playing rather than being the main thing, so do games use narrative as support for playing the game. Changing it the other way around undermines both playing and gaming.

An ageless game will sell to future generations despite its technological backwardness. This is why emulation will never cease to exist, as anyone with basic computer skills and reading comprehension has probably already fired up at least one sort of emulator. As an anecdote, I've seen people as young as seven doing this without any outside help, and they enjoyed playing Super Mario Bros. on JNes. Why Super Mario Bros.? Because Mario is still a cultural icon, and on a Nintendo system it's most likely the first thing people go for. Not because of the modern entries in the series, but because of how large an impact the franchise left on the face of culture in the 1980s and early 1990s.

Much like the game industry at large, the companies with a long history in electronic gaming often simply ignore the possibilities of their library. Instead, we may see plug-n-play conversions of some titles, like with the Atari 2600, but sometimes we get a product that hits the cultural nerve just the right way and outsells expectations to the point of amazing even the producers themselves. The NES Mini surprised Nintendo and its execs without any shadow of a doubt, as mentioned by Reggie in a CNET interview regarding the Switch. To quote him:

The challenge for us is that with this particular system, we thought honestly that the key consumer would be between 30 and 40 years old, with kids, who had stepped away from gaming for some period of time. And certainly we sold a lot of systems to that consumer.

Reggie claims that Nintendo is aware of the popularity of their classic games, which he contradicts with this statement. Furthermore, if they were aware how popular their classic games were, Nintendo would aim to make them obsolete with new titles that surpass them, rather than push games that enjoy less popularity. The NES Mini, as Reggie mentions above, wasn't just popular with the people who grew up with the console, but with basically every age tier. Furthermore, it should be noted that even in Europe the legacy of the NES has become that it was the victorious console, but do go back a few entries to read how Nintendo royally fucked up the NES in PAL territories.

It's not just nostalgia that sold the NES Mini. As Reggie said, the NES Mini is popular among kids, and kids have no nostalgia for a thirty-year-old game console. The games cherry-picked for the system are simply, for the most part, well designed and can stand the test of time. Super Mario Bros. does not appeal just because it is a Mario game, but because it's a fun adventure in a fantasy land. Zelda's open-world action-RPG is popular outside the fans of the franchise (and I hope to God BotW will have an open world in the spirit of the original). Metroid's action-adventure similarly appeals to a larger crowd than just the fans, though game devs have been furiously masturbating to this genre for the last few years.

There is nothing that would keep Nintendo from realising the spirit of their older games in their future titles. Nothing keeps an old game from appealing to modern consumers, just like there's nothing keeping modern children from playing games invented a couple of hundred years ago. We still play card games like Go Fish! or Shitpants with our kids. Hell, one could even say that when we grow into adults (or rather, when we realise we are adults) we still keep playing the same games, the stakes are just higher. Poker may replace Go Fish!, but a new generation will still play the latter. A new card game for kids will appear in the future to supplement the already large library of card games, but it'll never be able to replace anything if it doesn't refine the formula somehow. Even then, it's hard to beat a solid classic.

To use another Nintendo example, take the Wii. The Wii's Virtual Console sold more titles than Nintendo's big releases in the latter part of the console's lifecycle, and it saw a slow death on the 3DS. This seems to say that Nintendo doesn't really take to heart the notion that classic games and their core are still viable. Instead, they concentrate on something new and surprising and assume that old games are only played due to nostalgia. A sentiment the game industry at large sadly seems to agree upon. With the success of the NES Mini, will Nintendo begin to value their classic games more, rather than just as the beginnings of an IP? Probably not, but the Switch should tell us in due time.

Monthly Three; The time Nintendo lost Europe

When we speak of the NES's success, it really is more about the success Nintendo saw in the United States and Japan. Europe, on the other hand, Nintendo lost in the 8-bit era due to their own direct actions and inactions regarding their home consoles. It wouldn't be until the SNES that Nintendo would rise to be a household name there. While the PC market and console market are largely separate business regions when you get down to it, despite modern game consoles being dumbed-down PCs and all that, they do exist in parallel and can influence one another. The European home computer market of the 1980s and early 1990s, before the IBM revolution had set in permanently, did compete with the home consoles almost directly, but there is a damn good reason for that.

When Nintendo brought the NES to the European region, it had to fight a different fight than in the US. The US console market was dead at the time, but in many ways such a thing didn't exist in Europe. European home computers like the ZX Spectrum, Commodore 64 and Amstrad CPC had a firm footing in European game markets. One could even go as far as to say that the console market didn't exist in the same form in Europe as it did in the US and Japan, and Nintendo's entry into European markets would be difficult at best. Let's be fair, the second North American video game market crash in 1983 affected the European market jack shit. Atari was better known for their computers than for their consoles across the Old World.

Markets, plural, is the keyword here that needs to be remembered, as Europe is not one nation like the United States. While I'm sure everybody is aware that each nation in Europe has its own distinct culture, people and legislation, I do feel a need to emphasise that you are largely required to deal with each nation independently. The European Union has made some things easier when it comes to business trading, but the less I talk about the EU here the better.

One of the weirdest pulls Nintendo did for Europe was to split the PAL territory into two sub-territories when it came to region locking, with Mattel handling distribution in the so-called A-territory, while numerous other companies handled the B-territory. The Mattel-branded territory also had a Mattel-produced NES variant that looks exactly the same on the outside, except that it reads Mattel Version and has the lockout mechanism keeping wrong-region games from working on it. It doesn't make much sense that you'd have to keep an eye on regional lockout within your own region, but that's how Nintendo rolled, until in 1990 they established Nintendo of Europe to handle continent-wide dealings, kicking the Mattel version to the curb. One of the reasons was that the NES was a relatively rare console, especially in the UK, where it was sold in specifically selected stores, mainly chemists and such, for whatever odd reason. You'd think selling the NES at Woolworths would've been the best idea, but no. This applied to games too, though the rest of Europe saw both games and consoles being more widespread. However, they were still a relatively rare sight in the late 1980s compared to computer software.

Some of the companies that handled the NES outside the UK fared better, some worse. Spain was handled by Spaco, who were lazy with their game distribution and at some point tried to emphasise their own titles over others. In all European countries games came out a few years later than their US versions, though it should be mentioned that Sweden was one of the countries that got the NES as early as 1986, whereas some saw the console released a few years later. Bergsala handles Fennoscandia overall nowadays, but back then they only handled Sweden. Norway was Unsaco's region, whereas Funente originally dealt with Finland. Importing games from other countries was a common practice in Fennoscandia, though the NES still had to fight against computers like the C64. Digging up all the history the European NES has would fill a whole book, so the scope of this entry will be kept limited.

The second reason why Nintendo failed in the region was the pricing of their games. While the US had always seen relatively high-priced games, the European market was almost the exact opposite. A standard NES release cost about £70 at the time, which translates to about 82€ or $86. Even now that price seems over the top. In comparison, Sega's Master System had games going for some £25, or about 34€ and $36. Even the Master System had lower sales than home computer software, which could see pricing as low as £10, or about 12€ / $12. Regional variations of course applied across the board, but the level of pricing didn't change at any point. You just got less bang for your buck on the NES.

To add to this, the Sega Mega Drive saw a PAL-region release at a time when home computers were hitting a slight breaking point, and it offered new games to play at a still lower price, making the Super Nintendo's market entry that much harder. Both Sega and Nintendo had American-emphasised titles as well, with StarTropics being one of the best examples, alongside Sega's overall strategy for selling the Genesis in the US, but Europe saw no such emphasis. Even Sega tasked third-party companies with handling the PAL territory, such as Mastertronic in the UK, who marketed the Master System aggressively, selling the console at an undercutting price of £100. Sanura Suomi handled the Master System in Finland. The Benelux countries were handled by Atoll, who handled Sega's licenses between 1987 and 1993. Only a handful of European-exclusive titles exist compared to the US and Japan, and they're not remembered all that fondly in the annals of gaming history, mostly because the historians rarely give a damn about European gaming.

Furthermore, game enthusiasts quickly noticed that NES games ran slower than intended and with black bars on the screen. This was due to different broadcast standards, where the PAL region ran at 50Hz and NTSC ran at 60Hz. Companies across the board didn’t give a flying fuck about porting their games properly, instead doing a quick job and letting their games run around 17% slower. Interestingly, the only game that was properly optimised for the PAL region is Top Gun 2. A more interesting oddball of the bunch is Kirby’s Adventure, which was patched to have proper pitch and tempo in its music while having the engine running at PAL’s 50Hz. The exception is Kirby himself, who moves at normal speed, so everything around him moves about 17% slower than intended. This kind of screwfuckery didn’t really instill confidence in Nintendo among European consumers. In the end, the NES didn’t penetrate the market, sold games at a far higher price than any of its competitors, and had fewer titles distributed, titles that were worse than their NTSC counterparts in terms of performance.
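
For the record, that roughly 17% figure is just a back-of-the-envelope consequence of the refresh rates, assuming the game logic is tied directly to the frame rate and nothing is rescaled for PAL:

\[ \frac{60\,\text{Hz} - 50\,\text{Hz}}{60\,\text{Hz}} = \frac{1}{6} \approx 16.7\% \]

In other words, a 60Hz NTSC title dumped blindly onto a 50Hz PAL display loses about a sixth of its speed, which rounds to the 17% slowdown mentioned above.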

For these reasons, many third-party titles that American and Japanese audiences enjoyed on the NES were enjoyed in different forms on various home computers at much lower prices, and sometimes in superior versions too. This was the era where ports of a single arcade title could be drastically different from one another. The current differences between ports are laughable at best in comparison.

The way the European markets preferred Sega and home computer products over the NES is directly due to how different the market was, and how badly Nintendo handled themselves. The sheer amount of game software that home computers, and even the SMS, had at the time essentially made the rarer NES and its library a niche. Certainly, the NES saw a small renaissance in the very early 1990s prior to the introduction of the SNES, but by that point it was already a lost battle. There were companies offering decently priced low-end and high-quality titles for machines other than the NES.

As such, it would do good to remember that while the disruption strategy works, each region requires an equal amount of care, in a manner that fits said region. If a company were to push highly Japanese titles to America, it would fail. If a company were to push highly American titles to Japan, it would fail. Europe, on the other hand, is different, with each country having a different uptake on things. Countries like France and Italy were at one point the biggest European otakulands without even noticing it, while others shunned both Japanese and American products, concentrating on their own titles. In order to succeed further in European game markets, companies had to learn some new tricks and utilise each nation’s or region’s specific nature to their advantage. European game markets have changed drastically since the late 1980s, and perhaps that’s for the better. However, the early face of European game markets, and the industry itself, left a mark that is still seen and felt in how companies approach European consumers. Sometimes, they just don’t.

Demo the Trailer

There’s a rather lengthy piece of writing on how there is no such thing as a cinematic video game. It’s a good read, arguing largely the same issues as this blog when it comes to storytelling in video games. If you can’t be arsed to read it, it essentially goes a long way to say that a game’s story is ultimately best when told through the medium itself; the game’s own play, not cutscenes or the like.

The question asked in the writing, whether or not games need to be movies at all, should get an outright No. Indeed, a player plays the game for the active play, and whenever he loses that active part, e.g. in a pre-scripted sequence, the player’s interest wavers. Movies are different beasts altogether and have their own ways of doing things. The video game industry has relied too much on text and video in its storytelling, and the best thing coming from certain old school games is that they lacked both text and video to some extent, and gameplay did tell the story. The game industry masturbates over its masterful storytelling, never realising that most people seem to use the Skip button more than anything else in these games. I’ve still yet to find a modern game that did storytelling better than The Legend of Zelda. Every step of that game is an adventure worth telling on its own.

PlayStation Expo was last weekend, and we saw a lot of trailers and some gameplay footage. There is an interesting disparity here, where consumers get all hyped up over pre-rendered footage that is aimed at making the game look as good as possible and often lacks any sort of gameplay footage in itself. Game trailers, as much as we might hate to admit it, are largely just about the cinematic flavour in the same sense as movie trailers are. The best bits are picked for the trailer to show something nice and possibly attract interest. However, whereas with a movie trailer you may get a genuine idea of what it’s all about, a trailer for a game lacks that punch as it has no interactive element. It’s just footage of a game, or even worse, just footage of the videos inside the game.

To use The Legend of Zelda as an example again, a recent trailer for Breath of the Wild combines in-game videos with some gameplay footage in specifically selected sceneries. It’s also very boring to look at on every level. The direction isn’t anything to write home about, and neither is the actual game content, of which we only get to see a little. All the enemies and NPCs we see are boring as well. The music tries to hit your feelings, but only fanboys would falter at that point. It’s like Mega Man X suddenly popping up as a Marvel vs. Capcom character; same thing.

What the trailer does is show you stuff that’s largely incoherent and has no context. The fantasy it represents isn’t classic Zelda, but then Zelda games haven’t used their original source of fantasy for a long time now. It’s more like a Chinese knock-off now.

A trailer for a game does not meet the same qualifications a trailer does for a movie. A game demo is to a game what a trailer is to a movie. However, for some years now a lot of people have been asking what has happened to game demos. All platforms seem to have fewer and fewer of them. There is no one concrete reason, though the most commonly mentioned one is that a demo gives a straight and raw deal of what the game is like, and seeing as games’ overall quality has been stagnant, people simply aren’t interested in purchasing a game after trying out its demo. Jesse Schell argued in 2013 that games that have no demo sell better according to statistics. I don’t see a reason to argue otherwise three years later, seeing as there is still a lack of demos.

If a demo cuts a game’s sales, that means the game isn’t worthy in the eyes of the consumer to begin with. The less information the consumer gets, the better for the developer and publisher. Sucks to be the consumer who buys games without checking and double-checking sources and YouTube videos on how the game plays out, and even then there’s a lack of interactivity.

This is where raw gameplay footage serves a purpose, as do Let’s Plays. If trailers are made to simply sell you the game with the sleekest look possible, only to fail you when you pop the game in and see how much everything has been downgraded from that spick-and-span shiny video, then raw gameplay and Let’s Plays are the opposite. Well, the opposite would be a game demo, but you get the point. The two showcase the game as it is in all of its naked glory and allow a more direct and objective assessment of the quality of the product. Of course, no company would really prefer to give this sort of absolutely objective view of their game, unless the circumstances were controlled and hype would take over.

Hype and game trailers tend to go hand-in-hand with certain titles. Just as these trailers are made to hype us to hell and back, the hype keeps us from seeing possible flaws. Then you have ad people stoking the fire even further and so on. Look at how No Man’s Sky was hyped and how the product ended up, and you’ll see how much we need demos, but as consumers we can’t affect that one bit. After all, we’re just money pouches to fund whatever personal glory trophy projects these innovative and creative gods of creation want to make.

I picked up Tokyo Xanadu eX+’s demo recently and made the decision not to purchase the game until I can get it dirt cheap. The game does not stand up to Falcom’s brand overall. The demo’s content is largely boring and feels archaic, like something from a PS2 game. As a consumer, I am glad I had the chance to personally assess the quality of the product to an extent before shoving my money into it. This should be a possibility for everyone when it comes to games, so developers couldn’t just dilly-dally. The lack of demos is also one of the reasons why Steam allows consumers to return games that don’t meet their expectations. Demos would probably have prevented the need for this to a large degree.