Thousands of failures

Great design is like great translation; you don't notice it unless you make the effort. The problem with this assumption is that there is no design with universal acceptance. Let's use something general as an example, something most of you use in your daily life, like a cupboard handle in your kitchen. Now that I've mentioned it, you're probably conscious of its shape but may not really know how it feels in your hand. After all, it's just a handle you pull and push every day, probably multiple times. This handle may be very ornate or just a simply shaped metal arch, but it is something you should never really be conscious of. At least not after you've finished that kitchen renovation that took ages, made your wife mad and probably ended up costing you an arm and a leg after you managed to screw up the installation process early on. There are more fitting handle shapes than there are hands, because each of our hands can accept more than just one shape. We just tend to notice when the handle doesn't really want to work with our own.

The number of handles doesn't mean there is an equal number of successes. While there may be thousands of handles that fit just perfectly, the reality is that there have probably been five times as many discarded pieces that never moved beyond the prototype phase. And the sad reality is that some of these protos were probably better than the final product. For each successful product there are tens if not hundreds of unsuccessful attempts.

Even the most seasoned designer makes missteps and sometimes fails to realize what is self-evident to the consumer. This is why prototyping and giving enough time to finalise the product is incredibly important. Not just in design, but in every field. The sad thing is that no product is ever truly ready and will have to be released into the wild in a good-enough state. With games, this good-enough bar has been lowered so many times that games are essentially released half-finished in order to hit the publishing date, and the missing content or known bugs are fixed through a Day One patch. God, I hate Day One patches; they never bode well.

How does a designer know he screwed up? In the game industry it's pretty clear, since consumer feedback can be directed at the designer through forums and social media. Sales come second, but they only tell you that the product wasn't met with the best acceptance out there. It's not exactly easy to pinpoint why a kitchen handle didn't make a breakthrough in the market, but we have to allow some leeway here; kitchen handles don't tend to sell tons after the initial launch. They're not something people need to renew too often. If ever.

The easiest way of knowing what went wrong with a design would be to have the user tell you outright. For a handle: where it chafes, what it does wrong to your wrist position, whether the surface is too sleek and causes slipping, and so forth. Not exactly rocket science, but the general consumer doesn't really care to give such feedback. Then again, door handles really aren't a million-dollar business, so losses from more experimental and niche products aren't a big deal. The good old, time-tested basic shapes still rule the market.

Feedback is something all designers should want. I say should, as this splits opinions. To some, a finalised product is as intended and fills the role it has been given. There is no reason to go and change the product afterwards, no matter what the feedback says. Sadly, this doesn't really bode well, and I've seen a few companies go bankrupt because the people in charge were unwilling to change aspects of their products. After all, design isn't art and doesn't require the same respect for the author's intent. This goes for visual design as well; web design, for example, is very dependent on how the consumer can navigate the site. I'm sure all of us could give loads of feedback to websites about their current designs.

However, as said, the consumer isn't really willing to give feedback, not when it's really needed. The skill to read this feedback is important as well, as feedback on a product is not a personal assault. One needs to be professional and keep a proper distance in order to read through some of the harsher bits. The difficult part begins when you start applying that feedback and notice that the very core idea your handle had is slowly being discarded in the re-evaluation and redesign process. This can lead to more prototyping and more discarded pieces, but this sort of thing happens only to something that's absolutely required for a task, like how the Xbox's controller got completely redesigned for the Japanese market after the hulking beast of a controller got some feedback.

Of course, when you have no feedback to go on outside sales, you're forced to analyse what went wrong. Unless you have some people around you to get things re-tested, or even the money to hire a test group. Sometimes self-evaluation is cheaper and more effective than general feedback, when the faults are apparent (even if you never thought of them before, despite them staring you in the face) and relatively easy to fix.

If a designer (or a company) manages to roll out a second, updated version of the product that makes the initial one obsolete, the initial release has essentially been trash. There's no way around it. Even with the best intentions, loads of time put in and a lot of polish on the product, a failure is a failure, and one just has to stand up and own the mistake to learn from it. Everybody is allowed to make mistakes; we just need to learn from them. A designer can't continue to create products that repeat the same mistakes, like a cupboard handle with corners sharp enough to cut your hand open when grasped.

Hard mode is now DLC

So I was intending to leave this Friday's post on a somewhat positive note about the Switch's possible future after reading Shigsy's interview with Time. The biggest positive here is that Miyamoto slightly hints that the Switch in a few ways seems to be Iwata's final piece, with him giving feedback on portability and ideas about networking and communication. How much of the current networking elements come from Iwata and how much was made disregarding his feedback is an open question. Iwata spearheaded the Wii and the DS, and if the Switch is anywhere near them in terms of idea and approach, then the Switch will definitely do better than the Wii U. Not that doing that should be all that challenging.

However, Miyamoto also speaks of virtual reality again. In essence, Nintendo is looking into VR at the moment, which ties into the obsession with 3D that Nintendo still has. If you look at how long Nintendo has been pushing the idea of 3D in games, you can trace it back at least to Rad Racer if not further. You could almost make an argument that the more Nintendo tries to push 3D and VR as the main element of their machine, the worse it does.

VR has currently gone nowhere. After the initial boom of virtual reality, nothing has come out of it. No software has changed the industry or set new standards. We've been told that VR will be at its peak in a few years for a few years now, and this repeats every time a VR product comes out. It's not about a lack of marketing or failing to market the product right. It's about the common consumer not really giving a damn about VR in actuality, and most VR headsets we currently have are far too expensive for their own good. None of them work independently, which only adds to the costs. They're a high-end luxury product at best with no content to back them up.

That said, Miyamoto cites Iwata talking about blue ocean and red ocean marketing, two points his own actions seem to dismiss most of the time, but he does commend Iwata for bringing this ideology to the front within the company. To quote what Shigsy said:

This is something that Mr. Iwata did, to really link the philosophy of Nintendo to some of the business and corporate jargon, while also being able to convey that to all of the employees at Nintendo.

Iwata had a presence both within the company and with consumers. While Nintendo had few faces after Yamauchi, Iwata stood out. He was the company's corporate face, one that managed to juggle between worlds. If you're a fan of his, you'll probably find elements in the Switch that underline Iwata's approach as the head of the company.

Nintendo has many faces now that Iwata has passed. It's not just Miyamoto and Iwata any longer; numerous other developers have come further to the front. It's almost as if each game or franchise is now attached to a face, like The Legend of Zelda being tied to Aonuma.

The recent BotW announcement video killed pretty much all my personal hopes for the game being something special, mainly because it confirms that even when Aonuma is wearing something that resembles a suit, he still comes off sloppy. Still, the video does right by having subtitles instead of him trying to speak English.

The fact that Hard Mode is now DLC signifies that Breath of the Wild won't be Zelda returning to its glory days as an action title that requires skill; it'll continue being a dungeon puzzler. Whether or not these DLC packs are an afterthought, it strikes me as very worrying. The Legend of Zelda had a completely new quest after the first round. Aonuma saying that they'd like to give seasoned veterans something new and fun is outright bullshit. New items and skins add to the game only in minuscule ways. A Challenge Mode was in previous Zeldas from the get-go. Additional map features do jack shit, unless the base map in the game is terrible. A new original story and a dungeon with further challenges are nothing new or exciting either; that's the basic, run-of-the-mill post-game stuff Zelda used to have. Modern Zelda tends to have terrible replay value, but this DLC announcement hints that BotW has even worse replay value than normal.

I guess this shows how Nintendo is going to deal with the Switch overall, at least after the launch. The Switch requires extra purchases to be complete, like buying the Charging Grip because the bundled one doesn't charge. The game industry has been blamed for cutting its games into pieces to sell as DLC, and it really does feel like that at times. DLC is often developed alongside the main game, and nowadays DLC is planned from the very beginning. Taking this into account, with the Aonuma announcement video Nintendo effectively showed that they took things that used to be standard parts of modern Zelda, to some extent, and made them DLC. The veterans they refer to are the core fans of modern Zelda.

Nintendo can't have two dud consoles on their hands now. Twenty years ago they could let the N64 under-perform when it came out much later than it was supposed to, and the GameCube couldn't stand against the rampaging truck that was the PlayStation 2. The economy is completely different now from what it was in the late 1990s and early 2000s. The Wii U was pretty much a disaster, perhaps even more so than the Virtual Boy, as it was Nintendo's main home console with the full backing of the company, in the vein of two of the most successful consoles in game history. Granted, not all machines can see the success of the Game Boy. They could, though, if developed properly and if the software library saw proper maintenance from first- and second-party developers.

I'm still going to stick with the Switch being more of a success than the Wii U. However, if Zelda BotW is any indication of the future, there is a fly in the ointment.

Ageless games across generations

Video games have more in common with hide-and-seek than with movies, literature or music. This is because video games, and electronic gaming in general, are the latest iteration of play culture. As such, games of the past, be it from the NES or Atari era, still find a home with the new generation of consumers just as easily as any well-planned children's play, game or even sport would. Only in the video game industry do we hear that something has become obsolete because of its archaic technology or because of that aforementioned new generation. Soccer, basketball and numerous other sports are still around because they are ageless; each of them has been passed down to a new generation, just as children's games are.

Children will invent stories as they play along, be it costume play, playing with figures or something else. While there is a rudimentary narrative running through these plays, playing is the main thing. Electronic games, PC and console games especially, are largely a legacy of this kind of play. The problem with electronic games is that they are static and can't dynamically change as the player wants. This is why more varied games are always needed, and the more unique titles we have, the better. The Legend of Zelda and Skyrim may be based on a similar notion of a hero in a fantasy land, but their realisations are different and serve different purposes. On the surface the ideas and even the core structure seem similar, but as the reader already knows, the two games are vastly different in how they are played. Just as the narrative in children's play exists to reinforce the act of playing rather than being the main thing, so do games use narrative as support for playing the game. Turning that around undermines both playing and gaming.

An ageless game will sell to future generations despite its technological backwardness. This is why emulation will never cease to exist, as anyone with basic computer skills and reading comprehension has probably already fired up at least one sort of emulator. As an anecdote, I've seen people as young as seven do this without any outside help, and they enjoyed playing Super Mario Bros. on JNes. Why Super Mario Bros.? Because Mario is still a cultural icon, and a Nintendo system is most likely the first thing people go for. Not because of the modern entries in the series, but because of how large an impact the franchise left on culture in the 1980s and early 1990s.

Much like the game industry at large, companies with a long history in electronic gaming often simply ignore the possibilities of their library. Instead, we may see plug-n-play conversions of some titles, like with the Atari 2600, but sometimes we get a product that hits the cultural nerve just the right way and sells to the point of amazing even its producers. The NES Mini surprised Nintendo and its execs without a shadow of a doubt, as mentioned by Reggie in a CNET interview regarding the Switch. To quote him:

The challenge for us is that with this particular system, we thought honestly that the key consumer would be between 30 and 40 years old, with kids, who had stepped away from gaming for some period of time. And certainly we sold a lot of systems to that consumer.

Reggie claims that Nintendo is aware of the popularity of their classic games, which he contradicts with this statement. Furthermore, if they really were aware how popular their classic games are, Nintendo would aim to make them obsolete with better new titles rather than push games that enjoy less popularity. The NES Mini, as Reggie mentions above, wasn't just popular with the people who grew up with the console, but with basically every age tier. It should also be noted that even in Europe the legacy of the NES has become that it was the victorious console, but do go back a few entries to read how royally Nintendo fucked up the NES in PAL territories.

It's not just nostalgia that sold the NES Mini. As Reggie said, the NES Mini is popular among kids, and kids have no nostalgia for a thirty-year-old game console. The games cherry-picked for the system are simply, for the most part, well designed and stand the test of time. Super Mario Bros. does not appeal just because it is a Mario game, but because it's a fun adventure in a fantasy land. Zelda's open-world action-RPG is popular outside the fans of the franchise (and I hope to God BotW will have an open world in the spirit of the original). Metroid's action-adventure similarly appeals to a larger crowd than just the fans, though game devs have been furiously masturbating to this genre for the last few years.

There is nothing that would keep Nintendo from realizing the spirit of their older games in their future titles. Nothing keeps an old game from appealing to modern consumers, just like there's nothing keeping modern children from playing games invented a couple of hundred years ago. We still play cards like Go Fish! or Shitpants with our kids. Hell, one could even say that when we grow into adults (or rather, realize we are adults) we still keep playing the same games, the stakes are just higher. Poker may replace Go Fish!, but a new generation will still play the latter. A new card game for kids will appear in the future to supplement the already large library of card games, but it'll never be able to replace anything if it doesn't refine the formula somehow. Even then, it's hard to beat a solid classic.

To use another Nintendo example, take the Wii. The Wii's Virtual Console sold more titles than Nintendo's big releases in the latter part of the console's lifecycle, and then saw a slow death on the 3DS. This seems to say that Nintendo doesn't really take to heart the notion that classic games and their core are still viable. Instead, they concentrate on something surprising and assume old games are only played out of nostalgia. A sentiment the game industry at large sadly seems to agree on. With the success of the NES Mini, will Nintendo begin to value their classic games more, rather than just as the beginnings of an IP? Probably not, but the Switch should tell us in due time.

Monthly Three; The time Nintendo lost Europe

When we speak of the NES' success, it really is more about the success Nintendo saw in the United States and Japan. Europe, on the other hand, Nintendo lost in the 8-bit era through their own direct actions and inactions; they saw increased success with the SNES, but in overall terms their home consoles never truly dominated the region. While the PC market and console market are largely separate business regions when you get down to it, despite modern game consoles being dumbed-down PCs and all that, they do exist in parallel and can influence one another. The European home computer market of the 1980s and early 1990s, before the IBM PC revolution set in permanently, did compete with home consoles almost directly, and there is a damn good reason for that.

When Nintendo brought the NES to the European region, it had to fight a different fight than in the US. The US console market was dead at the time, but in many ways no such thing existed in Europe. European home computers, like the ZX Spectrum, Commodore 64 and Amstrad CPC, had a firm footing in the European game market. One could even go as far as to say that a console market didn't exist in the same form in Europe as it did in the US and Japan, and Nintendo's entry into European markets would be difficult at best. Let's be fair: the second North American video game market crash, in 1983, affected the European market worth jack shit. Atari was better known for their computers than for their consoles across the Old World.

Markets is the keyword to remember here, as Europe is not one nation like the United States. While I'm sure everybody is aware that each nation in Europe has its own distinct culture, people and legislation, I do feel a need to emphasize that you are largely required to deal with each nation independently. The European Union has made some things easier when it comes to trading, but the less I talk about the EU here the better.

One of the weirdest pulls Nintendo did for Europe was splitting the PAL territory into two sub-territories when it came to region locking, with Mattel handling distribution in the so-called A territory, while numerous other companies handled the B territory. The Mattel-branded territory also had a Mattel-produced NES variant that looks exactly the same on the outside, except it reads Mattel Version and has that lockout mechanism, keeping games from the other sub-territory from working on it. It doesn't make much sense that you'd have to keep an eye on regional lockout within your own region, but that's how Nintendo rolled, until in 1990 they established Nintendo of Europe to handle continent-wide dealings, kicking the Mattel version to the curb. One of the reasons for this was that the NES was a relatively rare console, especially in the UK, where the console was sold in specifically selected stores, mainly chemists and such, for whatever odd reason. You'd think selling the NES at Woolworths would've been the best idea, but no. This applied to games too, while the rest of Europe saw both games and consoles more widely available. However, they were still a relatively rare sight in the late 1980s compared to computer software.

Some of the companies that handled the NES outside the UK fared better, some worse. Spain was handled by Spaco, who were lazy with their game distribution and at some point tried to emphasize their own titles over others. In all European countries games came out a few years later than their US versions, though it should be mentioned that Sweden was one of the countries that got the NES as early as 1986, whereas some saw the console released a few years later still. Bergsala handles Fennoscandia overall nowadays, but back then they only handled Sweden; Norway was Unsaco's region, whereas Funente originally dealt with Finland. Importing games from other countries was a common practice in Fennoscandia, though the NES still had to fight against computers like the C64. Digging up all the history the European NES has would fill a whole book, so the scope of this entry will be kept limited.

The second reason Nintendo failed the region was the pricing of their games. While the US had always seen relatively high-priced games, the European market was almost the exact opposite. A standard NES release cost about £70 at the time, which translates to about 82€ or $86. Even now that price seems over the top. In comparison, Sega's Master System had games going for some £25, or about 34€ and $36. Even the Master System had lower sales than home computer software, which could be priced as low as £10, or about 12€ / $12. Regional variations of course applied across the board, but the level of pricing didn't change at any point. You just got less bang for your buck on the NES.

To add to this, the Sega MegaDrive saw its PAL region release at a time when home computers were at a slight breaking point, and it offered new games to play at a still lower price, making the Super Nintendo's market entry that much harder. Both Sega and Nintendo had American-emphasised titles as well, with Startropics being one of the best examples, alongside Sega's overall strategy for selling the Genesis in the US, but Europe saw no such emphasis. Even Sega tasked third-party companies with handling the PAL territory, such as Mastertronic in the UK, who marketed the Master System aggressively, selling the console at an undercut price of £100. Sanura Suomi handled the Master System in Finland, while in the Benelux countries Atoll handled Sega's licences between 1987 and 1993. Only a handful of European-exclusive titles exist compared to the US and Japan, and they're not remembered all that fondly in the annals of gaming history, mostly because historians rarely give a damn about European gaming.

Furthermore, game enthusiasts quickly noticed that NES games ran slower than intended and with black bars on the screen. This was due to different standards: the PAL region ran at 50Hz while NTSC ran at 60Hz. Companies across the board didn't give a flying fuck about porting their games properly, instead doing a quick job that left their games running around 17% slower. Interestingly, the only game properly optimised for the PAL region is Top Gun 2. A more interesting oddball of the bunch is Kirby's Adventure, which was patched to have proper pitch and tempo in its music while having the engine run at PAL's 50Hz. Except for Kirby himself, who moves at normal speed, so everything around him moves 17% slower than intended. This kind of screwfuckery didn't really instil confidence in Nintendo among European consumers. In the end, the NES didn't penetrate the market, sold games at a far higher price than any of its competitors and had fewer titles distributed, titles that were worse than their NTSC counterparts in terms of performance.
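As a side note, the roughly 17% figure quoted above falls straight out of those refresh rates; here's a minimal sketch of the arithmetic in Python, purely for illustration:

```python
# An NTSC game ties its logic to a 60 Hz refresh; run unchanged on a 50 Hz
# PAL display, every frame takes 1/50 s instead of 1/60 s.
ntsc_hz = 60
pal_hz = 50

speed_ratio = pal_hz / ntsc_hz      # fraction of the intended speed
slowdown = 1 - speed_ratio          # fraction of speed lost

print(f"PAL runs at {speed_ratio:.1%} of NTSC speed")   # 83.3%
print(f"That's about {slowdown:.0%} slower")            # ~17%
```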

For these reasons, many third-party titles that American and Japanese audiences enjoyed on the NES were enjoyed in different forms on various home computers at much lower prices, and sometimes in superior versions too. This was the era where ports of a single arcade title could be drastically different from one another. The current differences between ports are laughable at best in comparison.

The way the European markets preferred Sega and home computer products over the NES is directly due to how different the market was, and to how badly Nintendo handled themselves. The sheer amount of game software the home computers, and even the SMS, had at the time essentially made the rarer NES and its library a niche. Certainly, the NES saw a small renaissance in the very early 1990s prior to the introduction of the SNES, but by that point it was already a lost battle. There were companies offering decently priced low-end and high-quality titles for machines other than the NES.

As such, it would do us good to remember that while the disruption strategy works, each region requires an equal amount of care, in a manner that fits that region. If a company were to push highly Japanese titles to America, it would fail. If a company were to push highly American titles to Japan, it would fail. Europe, on the other hand, is different, with each country having a different take on things. Countries like France and Italy were at one point the biggest European otakulands without even noticing it, while others shunned both Japanese and American products, concentrating on their own titles. In order to succeed further in the European game markets, companies had to learn some new tricks and utilise each nation's or region's specific nature to their advantage. European game markets have changed drastically since the late 1980s, and perhaps that's for the better. However, the early face of the European game market, and the industry itself, left a mark that is still seen and felt in how companies approach European consumers. Sometimes, they just don't.

Demo the Trailer

There's a rather lengthy piece of writing on how there is no such thing as a cinematic video game. It's a good read, arguing largely the same issues as this blog when it comes to storytelling in video games. If you can't be arsed to read it, it essentially goes a long way to say that a game's story is ultimately best when told through the medium itself; the game's own play, not cutscenes or the like.

The question asked in that piece, whether or not games need to be movies at all, should get an outright No. Indeed, a player plays the game for the active play, and whenever he loses that active part, e.g. in a pre-scripted sequence, the player's interest wavers. Movies are different beasts altogether and have their own ways of doing things. The video game industry has relied too much on text and video in its storytelling, and the best thing about certain old school games is that they lacked both to some extent, and the gameplay itself told the story. The game industry masturbates over its masterful storytelling, never realising that most people seem to use the Skip button more than anything else in these games. I've still yet to find a modern game that did storytelling better than The Legend of Zelda. Every step of that game is an adventure worth telling on its own.

PlayStation Expo was last weekend, and we saw a lot of trailers and some gameplay footage. There is an interesting disparity here, where the consumers get all hyped up over pre-rendered footage that is aimed at making the game look as good as possible and often lacks any sort of gameplay. Game trailers, as much as we might hate to admit it, are largely just about the cinematic flavour, in the same sense as movie trailers are. The best bits are picked for the trailer to show something nice and possibly attract interest. However, whereas with a movie trailer you may get a genuine idea of what it's all about, a trailer for a game lacks that punch, as it has no interactive elements. It's just footage of a game, or even worse, just footage of the videos inside the game.

To use The Legend of Zelda as an example again, a recent trailer for Breath of the Wild combines in-game videos with some gameplay footage in specifically selected scenery. It's also very boring to look at on every level. The direction isn't anything to write home about, and neither is the little bit of actual game content we get to see. All the enemies and NPCs we see are boring as well. The music tries to hit your feelings, but only fanboys would falter at that point. Like Mega Man X suddenly popping up as a Marvel VS Capcom character; same thing.

What the trailer does is show you stuff that's largely incoherent and has no context. The fantasy it represents isn't classic Zelda, but then Zelda games haven't used their original source of fantasy for a long time now. It's more like a Chinese knock-off at this point.

A trailer for a game does not meet the same qualifications as a trailer does for a movie. A game demo is to a game what a trailer is to a movie. However, for some years now a lot of people have been asking what has happened to game demos. All platforms seem to have fewer and fewer of them. There is no one concrete reason, though the most common one mentioned is that a demo gives a straight and raw impression of what the game is like, and seeing as games' overall quality has been stagnant, people simply aren't interested in purchasing a game after trying out its demo. Jesse Schell argued in 2013 that, according to statistics, games that have no demo sell better. I don't see a reason to argue otherwise three years later, seeing as there is still a lack of demos.

If a demo cuts the sales of a game, that means the game isn't worthy in the eyes of the consumer to begin with. The less information the consumer gets, the better for the developer and publisher. Sucks to be the consumer who buys games without checking and double-checking sources and YouTube videos on how the game plays out, and even then there's a lack of interactivity.

This is where raw gameplay footage serves a purpose, as do Let's Plays. If trailers are made simply to sell you the game with the sleekest look possible, only to fail you when you pop the game in and see how much everything has been downgraded from that spit-shined video, then raw gameplay and Let's Plays are the opposite. Well, the opposite would be a game demo, but you get the point. The two showcase the game as it is, in all of its naked glory, and allow a more direct and objective assessment of the quality of the product. Of course, no company would really prefer to give this sort of absolutely objective view of their game, unless the circumstances were controlled and hype would take over.

Hype and game trailers tend to go hand-in-hand with certain titles. Just as these trailers are made to hype us to hell and back, the hype keeps us from seeing possible flaws. Then you have ad people stoking the fire even further, and so on. Look at how No Man's Sky was hyped and how the product ended up, and you'll see how much we need demos, but as consumers we can't affect that point one bit. After all, we're just money pouches to fund whatever personal glory trophy projects these innovative and creative gods of creation want to make.

I picked up Tokyo Xanadu eX+'s demo recently and made the decision not to purchase the game until I can get it dirt cheap. The game does not stand up to Falcom's brand overall. The demo's content is largely boring and feels archaic, like something from a PS2 game. As a consumer I am glad I had the chance to personally assess the quality of the product to an extent before shoving my money into it. This should be a possibility for everyone when it comes to games, so developers couldn't just dilly-dally. The lack of demos is also one of the reasons why Steam allows consumers to return their games if they do not meet expectations. Demos would probably have prevented the need for that to a large degree.

The 4K generation

During the last generation the Xbox 360 and PlayStation 3 were dubbed the HD Twins. Not necessarily by the industry itself, but at least by a small number of people. The current generation, which will be usurped by the Switch next year, started with HD as well. However, seeing as we're again at a point where companies do mid-generation upgrades instead of just mid-generation redesigns, I'll be dubbing the PlayStation 4 Pro and Project Scorpio, whatever its finalised name will be, the 4K Twins. Technically, the Xbox One S should be added there too, but its games are upscaled to 4K rather than being native. Call me nitpicky.
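For reference, the jump the 4K label implies is simple pixel arithmetic, which also shows why native 4K is a much heavier ask than upscaling; a quick illustrative sketch:

```python
# Pixel counts of the resolutions thrown around in this post, relative to 1080p.
resolutions = {
    "1080p (Full HD)": (1920, 1080),
    "4K UHD":          (3840, 2160),
    "8K UHD":          (7680, 4320),
}

base = 1920 * 1080  # Full HD as the baseline
for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels ({pixels / base:.0f}x 1080p)")
```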

4K is a bit of a problem, because most people don't have 4K screens yet. Just as HD became a thing during the last generation, it'll take some time before 4K becomes a standard. Scratch that, technically 4K is a standard already, but it is not widespread within the general population in the Western world. It takes time for people to adopt the latest cutting-edge technology, and that's good. Why is that good? People are holding technology back! I've heard someone ask. The reason is that when some piece of technology doesn't sell well at first, the causes can be many: high price, unnecessarily complex nature and usage, quality, and sometimes simply not being wanted are the more pressing ones. LCD television technology itself is a good example of this, as LCD, plasma and CRT televisions existed, and to some extent still exist, beside each other.

It would seem that the general population prefers to have mature technology in their hands instead of cutting edge.

4K and HD will exist beside each other at least until the end of the century, if not longer. This is a guess on my part, but seeing how 8K is already making its initial rounds, investing in a 4K screen might feel a bit off. Then again, that is the evolution of technology. Something new will always be waiting just around the corner. That's why we always come to a point where we can either pick something new up, or wait until things are ironed out and become more affordable.

That doesn't really work with the 4K Twins.

If these were redesigns of the existing consoles like what we've seen in the past, there would be no real contest over which one to pick up. Usually. The last version of the PS3 is just ugly. The PS4 Pro and the upcoming Scorpio will have a whole slew of new problems that have not yet been fixed. Mainly because we don't yet have an idea what those problems are, though most likely both companies are well aware of the issues with their machines prior to launch. The Red Ring of Death is still something that looms over Microsoft's machines. I haven't heard of any major malfunctions from this generation, outside some people seemingly having a bricked Wii U thanks to Mighty Number 9, but at least one person has reported a molten PS4 Pro. Take that with a grain of salt and do some research on the whole thing. Everything's possible, I guess. I'm no plastics expert. However, even a single case like this usually rings alarm bells in people's heads.

The whole possibly molten PS4P aside, the issue we should be more aware of is performance. Perhaps the hardware found in the PS4P is of a higher calibre than the base PS4's, but that should also mean that games run at a higher quality. Yet, if we take Digital Foundry's reports as true, some games run worse on the PS4P for whatever reason. Be it because of the new hardware or a lack of optimisation (or a lack of experience in optimising for the PS4P), this is something I wouldn't accept. But Aalt, aren't you the one who says graphics and hardware don't matter? Yes, yes I am, and I'm getting to that.

The whole point of mid-generation updates is, by all means, to allow developers to put better-looking stuff out there and have their games run better. In reality, this thought only goes halfway. Devs will most likely push for better-looking stuff, but will continue to ignore optimisation and a 60fps lock if the game needs to be out. Some titles will sell on their name alone, damned be the quality of the title. The design quality of a game should not be dependent on the hardware. The controller a game is played with affects the design more than the hardware does, though we can all agree that simple number-crunching power can allow some neat things overall. In the end, it's the design that counts. What design? Well, that's another post.

Now, the question I have about the PS4P, and the Scorpio by extension, is whether we should be early adopters or sit back and wait for the kinks to be ironed out. Honestly, that's up to you. Some places recommend getting the base version for normal 1080p screens and some say go for the Pro anyway. I'd recommend just checking out the facts and making a decision based on those.

But there's another quick thing: should we all just jump in with the latest tech and keep things rolling at the speed of sound? No, because that's impossible. As said, most prefer mature technology, and even tech that's half a decade old can feel the most wondrous thing when properly designed and put to use. Those who didn't experience Laserdisc's ability to carry multiple languages on a disc were in awe of DVD's ability to house such things. There's also the point that not everyone simply has the money to keep up with the pace. As such, old and new living beside each other is to be expected, and that is exactly why SONY has not yet moved the base PS4 off the market. People will simply pick it up for its price alone and might have rational reasons not to go for the more expensive piece.

You can future-proof your technological choices only so far. At some point, all your equipment will be old and replaced with new standards. Old does not mean obsolete, and old can be of service for years longer than the newfangled piece of tech with all its problems still lying in the shadows.

I admit that this post was, to some extent, me putting down my own struggle with the current generation and trying to make sense of how to proceed in purchasing a console, or whether I should even make a purchase at all.

Divided by six thousand

Time is money, and accuracy demands time. This may not sound like a new thought, yet more often than not we fail to realize that we live in a world where most things are not perfectly accurate. No, I'm not talking about journalism; I'm talking about parts making and design.

In production we have four levels of tolerances, ranging from very rough to very fine. Very rough essentially covers things just done to get them finished, without much care about the end quality, as the maximum tolerances are around ±1mm. One millimetre does not sound like a lot, yet depending on the spot that margin of error can make all the difference. For something like a tractor, where you have a lot of parts under dire stress, the accuracy isn't all that vital. As long as it works. Certain medical equipment, on the other hand, is required to hold a given size to within a thousandth of a millimetre for the sake of the patient.
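To make the idea concrete, here's a minimal sketch of what a tolerance band means in practice; the function and the numbers are illustrative, taken from the ±1mm and thousandth-of-a-millimetre figures above rather than from any specific standard:

```python
# A tolerance check in its simplest form: a nominal dimension, an allowed
# band around it, and a pass/fail for a measured part.

def within_tolerance(measured_mm: float, nominal_mm: float, tolerance_mm: float) -> bool:
    """Return True if the measured dimension falls inside nominal ± tolerance."""
    return abs(measured_mm - nominal_mm) <= tolerance_mm

# Rough work: a part 0.6mm off still passes a ±1mm band.
print(within_tolerance(measured_mm=100.6, nominal_mm=100.0, tolerance_mm=1.0))    # True

# Fine, medical-grade work: the same part fails a ±0.001mm band.
print(within_tolerance(measured_mm=100.6, nominal_mm=100.0, tolerance_mm=0.001))  # False
```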

A craftsman who works machines by hand has to gain a rather large amount of experience before he has the skill to truly work within the finer levels of accuracy. Experience is a major factor, as the machines we work with are not perfectly accurate themselves. Double-checking levels and re-adjusting alignments only as needed doesn't really cut it; that needs to be done almost every time a job is started to make sure things are straight. Accuracy starts with prepping and planning.

Of course, modern CNC production has made accuracy more or less self-evident to most. The machines' movement accuracy is nearly perfect and depends on the systems' own internal measurements and the settings the user inputs. A designer has his own workload here: to design an item that can be machined properly and to consider the dimensions of the object.

Nevertheless, even with CNC machining, the number of passes the machine has to make to ensure a proper surface within proper tolerances cuts both ways. A rough milling pass will leave a finish most wouldn't like, and the corners and cuts may be near or even outside the tolerances. Even for a machine it takes time to properly finish an item to a finer degree. Often much less time than it would take a craftsman, and more often than not factories don't even use individual lathes or milling machines for mass production, just for part repairs and prototyping.

Just as design is at its best when you don't really notice it, accurate tolerances are something you may notice once in a while but take for granted most of the time. Things just have to fit in order for them to work, and that's how it should be.

And yes, I totally agree. However, it also has to be valued. Object accuracy, making sure that parts simply fit together, is so self-evident that we barely give any thought to how important it is in our lives. It's natural, yet the challenge of producing accurate objects rises as the required accuracy goes up. Almost exponentially so. Sure, we could always finish up an item with sandpaper and a very fine file, but that's not really doable in the modern world. Speed and efficiency have to be considered, and we don't have the time to dilly-dally to get something just perfect. This may sting your ear a bit, but good enough is satisfactory more often than not.

However, it's also interesting to note that most modern designers work with absolute measures rather than within tolerances. Personally, I always rally for designers to work with the production tools their designs will be realized with, in order to understand the steps and methods needed to produce their design. A craftsman tends to design within or just slightly beyond his skill set, to push himself just a bit further.

If you read this entry a bit deeper, you might notice that it is part of a theme that has popped up here and there: traditional design and craftsmanship being more or less replaced by modern technology. That is not a negative thing in itself; that's change and evolution. Creating a crown is traditionally thought to be work for artisans and jewellery makers, but nowadays we have designers and machines that can objectively make better products at a lower cost than the traditional craftsmen.

However, the work these traditional craftsmen do is barely visible, and only certain fields are valued to any significant extent. I'm not even sure how well informed people are about what sort of job a machinist, for example, has on his hands when he gets the plans.

We live in an age where we can substitute a traditional craft with one person and one machine. Not only is it more effective and faster, but also cheaper for those very reasons. I started this post with accuracy and how it costs money, but I'm ending it with the thought that in the future we might not even need those traditional crafts, and accuracy will have become even more mundane than it already is.

Each craft tends to think theirs isn't valued enough, but perhaps that's true. Everybody should appreciate other fields of work just as much as they value their own. Nevertheless, being an artisan might, in the end, be one of the more useless jobs in the world, as their niche of being able to design and produce products is becoming a mundane everyday thing with the advent of 3D printers and machines far superior to man.

It's not a thought I entertain lightly. The fact is that the world demands more production at better prices, and work done by hand costs. Machining may not have the same spirit and individuality, but it gets things done a helluva lot faster and more efficiently. New tools replace the old, names and professions change, but the demands and needs don't change too much. Work can be made obsolete by progress, unless we consciously keep it alive.