When looking back at these last few generations of gaming consoles, sometimes it seems like they have been exceptional in some ways. Not in terms of games or quality, but the machines themselves. Outside Nintendo's offerings, the HD Twins, as they were called, don't really separate themselves much anymore in what they do and how. Both Sony and Microsoft tend to push similar boundaries with their consoles without really doing anything special on the side. Microsoft has that whole Windows ecosystem to work with, and the Xbox brand has become their universal mark of gaming, more or less. Sony is jumping on multiplatform cross-over play, for whatever reason, but I guess the fact that developers can now make shit work across all major platforms is a positive thing to have in your back pocket. Then you have the whole upgraded-systems thing, which hadn't really been a thing since the early generations of consoles, but came back rather hard with all the upgraded consoles the three major console companies have been pumping out. I guess the first modern example would be the DSi.

One thing that seems to be making a comeback is games spanning multiple discs. Historically speaking this has always been a thing in gaming, with old PC games spanning multiple diskettes. I remember Beneath a Steel Sky coming on fifteen disks on the Amiga. X-Plane 10 supposedly spans eight DVDs. Everquest 2 was on ten CDs when it was released. Command and Conquer may have only come on two discs, but each disc carried the campaign for one of the two respective sides. Consoles didn't have multi-cartridge games in a similar manner because you can't just yank the cart from the console without the danger of damaging both the console and the game. After all, there is a live current going through the cart; it is effectively part of the machine itself. Disks and discs are read and are not part of the PCB, after all.

Not to say multi-disc games have really been gone at any point. The X360 used DVDs and many of its larger games came on multiple discs compared to their PlayStation 3 counterparts. Lords of Shadow, for example, came on two discs. Blue Dragon supposedly required three. The Blu-Ray Disc, or BD, really allowed developers to just throw everything on the disc uncompressed. It's the sound files that most often take the space-hungry spot, be it music or voices. Mostly voices nowadays. Because of practices like this, game file sizes have been increasing steadily to the point of stupid. Games that are several tens of gigabytes, or perhaps even hundreds, could be shaved down in size by compressing and packing things properly, but it seems that skill has been lost to modern game developers. Maybe it's because all the tools and engines that are around come readily made and nobody really wants to tackle a problem nobody sees as a problem, at least not in the industry itself. Consumers on the other hand tend to groan when they have to wait several hours for their game to download when it's a digital entry, not to mention the shit has to be installed. I miss the days when I could throw a game inside a console and let 'er rip, but nowadays I need to sit back and wait another thirty minutes for it to install. There's a damn good reason I keep playing the Switch more than the PS4 nowadays.

It's strange to think that multiple discs per game would be a detriment in itself, as it has been standard practice, well, since the first floppy diskette couldn't hold all the DnD characters some nerd had cooked up during his university days. Reading around a bit, I can't really find any bona fide dislike toward multi-disc games, but there are some individuals here and there who seem to consider that the industry is pushing for digital-only due to the lack of space per disc, like Allie-RX, a Youtuber of some sorts. Should we consider multiple discs a valid reason to further the push for digital-only materials? Hard to say, but it might as well be one of the arguments. With modern politics, though, the argument wouldn't sway in the direction of lack of space. It'd be about how digital-only is environmentally more sound, that we're going to save the planet by not printing all that plastic. Wording which is largely horse shit. As for space limitations on the disc, BDXL has 128 GB of space, and 4K Ultra HD BD discs offer some 100 GB. While we talk about terabytes and petabytes as the standard large-scale units in modern computing, a game taking over 100 GB should raise an eyebrow and make you question what exactly is taking all that space. As mentioned, it's largely the uncompressed data on the disc and the lack of know-how regarding compression and packing. We're well past the era when developers had to develop new compression algorithms to shove everything onto a disc or cut down the number of discs. For example, Capcom had to come up with new, effective ways to compress all the sprite data of Mega Man X4 in order not to run out of space. The PlayStation really sucked for 2D sprite games with its limited RAM, and some companies had to come up with clever ways to change the sprites in memory on the fly. Then you have companies that went for the flashy stuff, like Square and its FMVs in the same era's Final Fantasy games.
Despite their quality and compression, these FMVs still took up the majority of the discs' space. If you removed the FMVs from the games, each game would've fit on one CD just fine. That, I would argue, is where the modern mindset comes from. It's not that there isn't enough space on modern discs, but that developers don't need to concern themselves with the limitation of space. Much like so many other aspects of game development, space has lost its limitations, and it is very easy to just let it bloat like a dead body in the water. So much rotten hot air inside, and the colour ain't really healthy either.
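To make the compression point concrete, here's a toy sketch (not from any real game's data, just an illustration): highly redundant data, which raw game assets often are, shrinks dramatically under even a bog-standard general-purpose compressor.

```python
import zlib

# ~1 MB of highly redundant bytes, a stand-in for uncompressed, repetitive
# asset data such as raw audio or sprite sheets.
raw = b"\x00\x01\x02\x03" * 250_000

# Compress with zlib at its highest level; decompression restores the
# original bytes exactly, so nothing is lost by packing things properly.
packed = zlib.compress(raw, level=9)

print(f"raw: {len(raw)} bytes, packed: {len(packed)} bytes")
assert zlib.decompress(packed) == raw
```

Real assets won't compress anywhere near this well, but the principle stands: leaving data on the disc uncompressed is a choice, not a necessity.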

Digital isn't really a solution to the problem the industry supposedly faces. Not everyone has multiple terabytes of free space on their computers. Some people have the minimum required amount of space on their PCs, and some can't even use external devices to expand the storage. It's a case where we may have all this space at hand, yet there is a surprising number of consumers limited by it. An easy argument for streaming perhaps, but streaming anything has its own issues. It might be a solution for films, music, television and Visual Novels, but not for computer or console games. There is no real solution to any of this, though I guess HVD would be one if they ever managed to finalise this decade-old tech and launch it commercially as BD's successor, but BD still has life left in it. Still, 3.9 TB of space on a single disc should be more than enough for all your needs regarding movies or games. I doubt people are willing to pay 100 bucks for a movie ever again, unlike what they did with VHS and LDs back in the day. Of course, the industry could also stop wasting space, but that ain't happening.
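To put those capacity figures in perspective, a quick back-of-the-envelope calculation using the numbers cited above (the 3.9 TB HVD figure and 128 GB BDXL are the claims as stated, not verified specs):

```python
# Rough arithmetic on the disc capacities mentioned in the text.
hvd_tb = 3.9            # claimed HVD capacity, terabytes
bdxl_gb = 128           # BDXL capacity, gigabytes
bloated_game_gb = 100   # a modern, bloated game

hvd_gb = hvd_tb * 1000
games_per_hvd = int(hvd_gb // bloated_game_gb)
print(games_per_hvd)                      # 100 GB games on one HVD
print(bloated_game_gb / bdxl_gb)          # fraction of a BDXL one game fills
```

So a single HVD would hold dozens of even today's bloated games, while one such game already eats most of a BDXL.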

March of the working robot

Ever since the industrial revolution revolutionised the mass production of goods, machines have replaced manual labour slowly but surely. The utopia where machines have taken over all manual labour is still a pipe dream, but ultimately it may come to pass, if technology and all related fields keep advancing. The rudimentary tool AI that drives most current industrial robots may seem simple, but that too is mostly a question of time.

Hobbies and industries have evolved remarkably in the last hundred years, even more so in the last thirty or so. If you wanted to make your own model kit from scratch, you needed to amass the materials and begin to cut and assemble them properly. Nowadays, that work has been relegated to a 3D printer, which simply accepts a model it needs to extrude from its nozzle. This is essentially what's at the core of this mechanisation of labour: one man and one machine. This is why some schools around the world have begun emphasising skills relating to future jobs, like coding, in order to ensure that no child would lack the basic skills to survive in the modern world. Whether or not this is the best approach is open to question, but it is undeniable that the mechanical workforce is slowly but surely making its way into regions where you wouldn't believe it fitting in. As is usually the case, these things start out small and build from there.

To use welding as an example: welding started out as heating two objects and then adding a third material to weld the objects together. It was revolutionised when modern welding via high current became a thing. Welding rods made things simpler. That evolved further into feeding a constant wire with protective gas. For some time now, in some cases, the human element has been almost completely removed and a robot arm welds as instructed. The human element is there to correct the machine, maybe finalise the product, but not to work the seams the robot is responsible for. The 3D printer mentioned above is this exact same phenomenon, and the same thing has been moving toward every field. Objectively speaking, we have no need for sculptors nowadays, when all you need is some 3D skills and access to a CNC machine. A router with a fine tip will always be better than the human hand.

All this is more or less self-evident, but what about workplaces that require a more human touch? Numerous stores have already installed self-service counters for customers to go through, needing to employ fewer workers. Phone services are a classic example, though not all of them work as well as they're intended to. The issue is one of intelligence, as machines don't have the general intelligence that would let them understand. Current AI can compute meanings from a library of definitions, but none of them truly understand what's told to them.

Human touch can be replaced, or at least mitigated to some extent. For example, Paro the Therapeutic Robot made its rounds a few years back when every news source showcased how it helped old people with things like stress. The seal-shaped robot requires some care to be given, like petting and talking sweet things to it. If left alone, it begins to whine. According to its site, if you hit it, it will learn and cease repeating the action that got it hit, something I doubt many people would want replicated in any living thing. Given the lack of contact with, well, pretty much anything in old people's homes sometimes, a robot that responds to your actions does seem like a good alternative, at least for some time. It's like how some people get a large pillow and put a picture of their cartoon wife on it. It might not be the same as hugging and sleeping with a real being, but the human mind is plastic enough to convince itself of a lot of things, like communism being a good idea.

With time, the intelligence of machines might reach a level high enough to at least understand limited topics. A robot cashier, for example, wouldn't need to understand anything beyond what the consumer is bringing to it: scan the products and request a payment. Such a robot should be relatively easy to build even with modern technology and would save companies money in salaries. Robots could even fill the shelves, given that numerous warehouses already run on automated vehicles that move things about without much human assistance.
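The cashier logic described above really is that simple at its core: map scanned barcodes to prices and total them up, no general intelligence required. A minimal sketch (the barcodes and prices here are entirely made up for illustration):

```python
# Hypothetical price lookup: barcode string -> unit price.
PRICES = {
    "4006381333931": 1.49,  # made-up item A
    "7310865004703": 2.95,  # made-up item B
}

def checkout(scanned_barcodes):
    """Total up a list of scanned barcodes and return the amount due."""
    total = sum(PRICES[code] for code in scanned_barcodes)
    return round(total, 2)

# Customer scans item A twice and item B once.
print(checkout(["4006381333931", "7310865004703", "4006381333931"]))  # 5.93
```

Everything hard about a real self-checkout (recognising unscanned items, handling fraud, talking to the customer) sits outside this loop, which is exactly the author's point about limited-topic machines.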

The industrial revolution had its Luddite movement, and Neo-Luddites are a thing today. Technology may make life easier and work cheaper, which is often the argument against it: it takes away jobs from the people. The car replaced the horse, and the welding robot replaced the welder. This of course always opens new job fields; now somebody needs to make the cars, though tech evolution now has machines building the machines that work. The argument is easy to understand, but at the same time technology has always moved like this. A tool that makes work easier and less strenuous is acceptable to most, but the idea of one's job being replaced by something inanimate raises eyebrows. Sure, some fields like medical doctors won't be replaced anytime soon, though as mentioned, as the fields evolve things won't look the same. If we want to give all jobs like this an absolute outer limit, it would be when general intelligence is created, that is, AI at a human level of intelligence. From there, nothing's a limit anymore. At that point, not even coding needs human input.

Is this post about personal fears regarding the job market? No, but the observations and discussions I've made during the last seven years alone show that industries reliant on hard manual labour will probably see drastic changes in a short period of time in the near future. It all depends on worldwide macro-economics, as such a change would need a driving force behind it. As much as some people hate to admit it, both World Wars advanced sciences and technologies in leaps and bounds, and we've been enjoying the fruits of those labours for some time now. The Cold War drove space tech another set of steps, but after that there hasn't been much driving us forwards. Well, outside the information warfare that's constantly raging without us knowing or seeing it. I doubt we're ever going to achieve a post-scarcity world like in Star Trek.

The robot work revolution is not all that relevant in our time, but it'll get there at some point, if we're lucky. With all the cuts in education and the downgrading of everything surrounding it, it's more likely that the future workforce will be better able to dabble with their phones than to calculate how many grams of drugs you should get.

VR has yet to break through

CEO of Unity Technologies John Riccitiello has a grasp on reality concerning both Virtual Reality and Augmented Reality kits. Speaking at TechCrunch Disrupt, held in San Francisco this month, he argued that there has yet to be a true launch of consumer-grade VR and AR devices.

Price of course is his first point of contention, and a fair one. Looking at standard local prices, the HTC Vive VR system costs 700€, with the Oculus Rift at around 550€. That is an extremely large sum of money, especially when you remember that you need a computer to run it, adding to the cost if an upgrade is necessary. You're easily looking at a package worth a grand, which is far too much just to set up a platform for an extremely limited offering of software. VR will stay an expensive piece of technology until computing technology, and technology in general, undergoes a massive advancement beyond headsets and screens. Computer Gaming Monthly's prediction from 1991 that VR would be affordable in 1994 has been overshot by two and a half decades now and counting.

The second point is control and function. Riccitiello argues that the user does not have enough control over the systems. The way the input has been designed limits the content the platform can have. Most VR titles follow the same by-the-book input and control method with the wands or a controller. The best way to enjoy VR at this point is to get a full racing setup with wheel, pedals and a good seat to get the best experience out of it. As it stands, both Oculus and Vive are using what essentially amounts to newer versions of Wiimotes.

Riccitiello is right that the current level of VR and AR technology has been launched for developers. Game developers love to play with the latest technology and dabble with it to see what's possible and what's not. Nintendo is a good example of this in general, considering they've tried new things with their controllers throughout the years and included a 3D screen on the 3DS. All the tech like this in Nintendo's products is mainly intended for them to explore, and the whole VR and AR boom follows the same steps. The consumer end is not considered, only what the developers are interested in.

In Riccitiello’s mind, there has yet to be a commercial launch. The software that’s out there does not meet the expectations or the standards for consumer use. Better technology is worth jack shit if the games for the end-consumer are the exact same we had twenty years ago.


Ride the Comix was a VR game in Disney Quest attractions

The VR industry has grown for sure, but it has not expanded. It would appear that VR has a better market in commercial applications than on the consumer end. The Virtual Reality dream, a headset that could launch you into other worlds, does seem to be more a pipe dream than anything else. As I've mentioned previously, it is the 2010s' equivalent of the 3D television boom. However, unlike 3D TVs, this one will survive in some form due to the overall saturation of the market and the sheer force of the pipe dream. Ever since the Sensorama came out in the 1950s, companies and developers have been aiming to realise something that would be "true" Virtual Reality.

If you take anything from this, VR and AR are nothing new and have half a century's worth of development and commercial ventures behind them. This is the crux of it; all of it is technological research and development, and even then it's all extremely limited in the end. Oculus' latest tech shows what each new VR device has done: expanded on the technology rather than trying to find better ways to do VR.

What does this technological progress give VR sets that they don't already have? To beat the dead horse: there needs to be progress on the software side more than in the hardware.

Is Riccitiello right that a consumer launch for VR has not been made yet? Perhaps not in the current generation, but VR history is full of consumer-grade releases. The VFX1 Headgear, the Victormaxx Stuntmaster VR headset, the Virtual Boy and the Glasstron were all released for the consumer end, though they were not fully dedicated VR products in themselves. However, that's where the whole evolution of software would come in, as showcased by Ride the Comix above.

Perhaps the biggest problem with VR and AR is that there is no public discourse about them. When Oculus and Vive were new, they were the hottest shit around to talk about, and PSVR soon followed. Hell, some PSVR titles have been patched to work outside the VR goggles to increase sales.

Riccitiello's positive view that VR will keep rising is probably right, but the rise will be slow unless things change toward the cheaper and more efficient. The general consumer's expectations of what Virtual Reality should do don't meet the developers'. That is not a blueprint for success, but for stagnation and, at worst, failure.

Updating the built-in obsolescence

Sometimes I come across news that just feels stupid. Logitech announced that they will shut down all services for their Harmony Link, essentially bricking the device with an update. Why? Well, the certificate on the technology inside the lil' smart remote has expired. This of course caused a rather serious backlash on the usual Internet forums, to which Logitech responded that they'll replace the obsoleted devices with a new one.

This is, sadly, par for the course in the modern era. Licenses and certificates from every which way are being implemented in devices that are not thought to last. Devices are not thought to last at all, with some companies expecting you to replace your phone yearly. Apple, for example, optimises all their latest updates for their newest models, the old ones be damned, meaning the old hardware gets a sub-optimal OS update, which slows things down and requires more numbers to be crunched. Apple pulled back one of their iOS updates after releasing it, as it made older systems inoperable due to an inability to make phone calls and unresponsive fingerprint sensors.

Back in the day, obsolescence was designed into the product from the get-go. Some film companies even wanted VCRs to wipe tapes slightly each time they were played. This meant that after a certain number of watches, the tape would be blank and the consumer would be forced to buy a new copy of the movie. Imagine if DVD or Blu-Ray discs and their players had been built so that after a certain number of watches, the player's laser would burn a mark that would prevent any further playback. Apple's products are full of planned obsolescence from hardware to software, with the customer being completely dependent on the company's services when it comes to maintenance and repair.

While bricking updates are nothing new, they've become more and more common at a steady pace. It is no longer profitable to design and manufacture products that last. We have the technology to make phones and whatnot last a solid decade, but this would mean the companies wouldn't get that steady stream of high revenue yearly. This may sound overly dramatic or even anti-corporate, but it is more or less personal experience with numerous companies. The discussions I've had with professionals from the industry who have worked in different fields of production, from the cases to the software, have all said the same thing: it's cheap. The outer shells cost barely anything to tool, the electronics are manufactured and fabricated at a very low price in countries that don't care about certain legislative issues, and assembly is done in areas where pay is extremely low and people are prevented from committing suicide by nets. Shipping per unit costs absolutely jack shit, and coding, done to drive the latest features, is probably the second most costly bit after advertisement. It is the name that drives the price up. Hell, the lack of an earphone jack and other physical properties in more modern phones is there to drive the production price down while the sales price is jacked up.

The only thing that ultimately costs is the brand. The iPhone X costs a thousand bucks to buy, and it has nothing to justify its price outside the Apple logo and branding. The profit margin is extraordinarily high. I won't even try to calculate the production price, but a good guess would be that the production costs are hundreds of times less than the final sales price. But hey, if people will pay for it, then that's the rule of the market.

That veered a bit off topic, but it's relevant. The core problem with updated obsolescence is that it will be everywhere. Smart homes are not all that common nowadays, but we will have more and more such devices in our homes, from freezers and microwaves to simple light switches. If any of these devices use similarly certified technology that has essentially been licensed from outside, they will face a kill-update. All these smart devices will contain programs and services, which the companies see as the main product. From a company's point of view, they're not really selling you an item, but the service the item enables. In this sense, the consumer is purchasing a long-lasting license to their service via this device. From the customer's point of view, they're paying for a device that enables a function, like the smart device control of Logitech's Harmony Link.

This disparity is clear in gaming as well, where companies and some consumers argue that nobody is purchasing anything anymore. Rather, you are subscribing to a service with a one-time payment. However, nobody can come to your home and disable your games. Unless you're using Steam.

If we're to believe this tight device cycle will stay for the foreseeable future, it will also cause another issue to build up. Apple alone is responsible for a huge pileup of e-waste, and if we count all the other electronics companies with a similar pace of new product introduction, we're getting large quantities of products that will not last long. Africa probably feels the brunt of the hit from this, with tons of e-waste being dumped in Ghana's landfills.

The first step to fight this cycle would be sustainable development and design. However, the core principle of sustainable design runs against most corporate interests, as it dictates that a product should be designed to last as long as possible. A phone that lasted a decade would not be as profitable as a phone that gets the shaft after two years.

Logitech's response to the outcry over their kill-update isn't any solution. The Harmony Link will become obsolete not because the devices have broken, but because the company chooses to terminate their function. The action is not a solution, but a pathetic way to weasel out of it. This is not sustainable design.

I'm not an Earth-hugging hippie by any stretch of the imagination you may get from this post, but sustainable development and design are two key factors that need to become more relevant as time goes by. We've only got one Earth, and seeing as we're not getting off this world any time soon, we should take better care of it.

Technology, consoles and computers

One way to tell the difference between a console and a computer gamer is how they approach their respective platforms. A console gamer approaches the games first and foremost, the platform being merely a means to play them. A computer gamer, however, concentrates on the specs, citing everything from the RAM of their gaming rig to the maximum resolution their monitor can output. Essentially, software versus hardware.

Because of the mixing of computer and console games, the approaches have become just as muddled. Computer games used to drive their users insane more often than not, as they had to keep updating their rigs to play the latest Ultima or Privateer. Not only did it demand money, but dedication most of all. For console gamers life was easier, as all they had to worry about were the release dates and whether or not they had the money to purchase the game, or, in the case of games like Zelda II, whether they managed to nab a copy for themselves.

Console games are tailor-made to run on a console, using the best capabilities it offers, from the controllers to whatever the hardware can do. It is far easier to produce a computer game, as there are very few limitations on what you can do with it, outside the whole keyboard-and-mouse controls. Whether or not you prefer those over a controller is a matter of opinion, even though one can make a proper argument about the tactility of controllers and their form fitting.

Recently SONY admitted that the PlayStation 4 was not at the technological higher end it was meant to be, but the premise itself is flawed. Consoles have never been at the technological high end. Because of the pace of technological development and the fixed nature of consoles, the moment any game console is released it holds outdated technology that can't be upgraded. However, SONY is absolutely right that the PlayStation 4 not being top of the line has little to do with game quality. Over and over again we have seen consoles with lesser specs beating their more powerful competitors. No, the Super Nintendo is not an exception to this. The Mega Drive had the 32X and the Sega CD in the end, putting it on a higher end than Nintendo's reverb-filled console.

Lately, we've been hearing a lot of roaring from computer gamers about 4k displays. Your normal user doesn't care about that at this point, nor has 4k displaced the other HD displays at this point in time. Display technology advances at such a high rate that buying a high-end television once every twenty or thirty years should serve you just fine, unless you're a huge tech fanatic.

For the computer side, 4k is another hardware issue they love to discuss, on how they could make the best possible use of it. For a console gamer, the issue is not relevant, at least not yet, because 4k sets are not common in households. The companies have released numerous versions of their sets offering 4k support, but as with other things, there's very little reason to put money into a 4k television now when there's very little that supports it. Much like with almost every console launch, the first years of any new technology see little adoption before a gradual shift either makes it a household standard or something new comes along and beats it. The change from VHS to DVD is an example of a rather rapid change in both industry and household standards. SONY wished to replicate that success with the Blu-Ray format, but the jump from DVD to Blu-Ray has been far slower. Even with DVD, the early players were rather low in quality, and some simply can't play the latest discs because of technology differences; then you had the fact that most DVDs were low-quality VHS transfers. There are instances where the Laserdisc edition was superior until a proper digital remaster came along, or in a few rare cases, a Blu-Ray release.

There is also the issue of worldwide markets. Cultural values regarding technology vary massively, and not all regions simply accept new technology as the best. That said, the opposite applies as well. All we can really do is individually weigh whether it is worth purchasing potential rather than proven practicality. Early adopters always purchase for potential, and there are times when they simply bet on the wrong horse, much like with Betamax or HD-DVD.

4k displays may be the latest tech, but we all know that in a year or two we may be hearing about something that makes 4k a moot point. Actually, there already exist numerous higher-resolution standards than 4k, like the DCI Standard and 8k FUHD. New ones will come along. Skipping a technological step isn't anything new, and the majority of consumers simply skip the things they don't consider worthwhile purchases. I can assure you that while 4k displays, objectively speaking, seem to offer a better visual experience, there are those who simply don't care and are fully content using their current sets for a variety of reasons.
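The arithmetic behind those resolution standards shows why each jump is such a big deal for hardware: every step roughly quadruples the pixel count, and with it the display and GPU workload.

```python
# Pixel counts for the consumer resolution standards discussed above.
standards = {
    "1080p (Full HD)": (1920, 1080),
    "4k UHD": (3840, 2160),
    "8k FUHD": (7680, 4320),
}

for name, (w, h) in standards.items():
    print(f"{name}: {w * h:,} pixels")

# Each step doubles both dimensions, so the pixel count quadruples:
# 1080p -> 4k is 4x the pixels, 1080p -> 8k is 16x.
```

That 16-fold jump from Full HD to 8k is why "something that makes 4k a moot point" is no idle threat on the spec sheet, even if the content to drive it doesn't exist.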

All that said, when will we see actual use for 4k display sets outside video games? NHK has announced Super Hi-Vision broadcasts for Japan in 2016. Eutelsat already operates a dedicated Ultra HD channel. 2013 and 2014 have been the years when Ultra HD made its impact, mainly within the industries. The end-consumer hasn't really seen anything worth purchasing outside the potential. Then again, we had people announcing that 3D would be the future, and that boom ended up being a simple whimper.

First adopters purchase for the possibilities products offer, whereas most other consumers base their purchases on what is already offered. The computer gamer would most likely put his money into a 4k display in order to keep himself atop the hardware race, whereas a console gamer wouldn't need a 4k set before something worthwhile came into play. Even then, when something has 4k support, there's always the question of whether the content itself is worth the investment.

Different take on customers; for the love of God learn how to use it

Why does this program ask me this? What is this message that Windows is showing me? Why can’t my phone do this? Why can’t I tweak my Mac for better performance? Why is there a virus in my computer? Why won’t this computer work? These are questions that I’ve heard too many times, especially the last one.

The Self-repair manifesto is something I expect everybody to follow to a limited extent. The idea of "if you can't fix it, you don't own it" is a bit extreme for the common folks out there, but it has the correct core in there. While I agree that some things are beyond the repairs of a mortal man and better left to the fixing gods at your local shop, I truly expect people to know how their devices and household items work. A surprisingly small number of people know how their vacuum cleaner or microwave oven works, and that's a bit alarming. In cooking, if you know how stuff works and what it does, cooking becomes both easier and much more entertaining in its own right. Then again, cooking for one isn't the most riveting thing to do. Trust me on this.

I recommend everybody open some of their devices, take a look at what they have inside and familiarise themselves with it. See where the power switch is, what kind of chip is attached to it, what things are in the way and how they're all connected. Using a reference guide on what certain parts are helps a lot. For example, knowing what a capacitor is and what it does helps in the long run. If one blows up, you might want to learn how to solder in order to replace it and fix the device yourself. Soldering isn't hard to learn, but just like everything else, it takes some training to get the idea and become good at it.

On the software side of computers, I can only blame the people who never wanted to know how their system works and just want to use it without anything getting in their way. Windows Vista's infamous security system, which asked if you really wanted to do something, was a direct result of people not understanding what they were doing. If something is made foolproof, it seems that its utility is almost completely lost. In most cases this also prevents the user from making tweaks and adjustments to the device as they see fit. It's pretty stupid that the more simplified systems get, the more explanatory text and hand-holding we get, which just pisses other people off.

Windows 8 is actually a good example of this. Where Microsoft wanted to go with Windows 8 was to make it more open to the common folk who were using tablets, but what they designed was one of the worst interfaces I've seen in a long time. It's a horrible GUI (look it up), even for tablets. But no, certain groups within Microsoft thought it was the best idea to make everything simpler and easier to understand, which ended up with the version we have. Honestly, Windows 8 is horribly designed, especially for home PC use. It's just so awful to use, switching between two views with neither fully supported. Microsoft really dropped the ball here.

And you know the reason why Microsoft thought Windows 8 was a good idea? Because there are a bunch of stupid people who just don't want to learn how to use the goddamn operating system. In other words, the customers are stupid enough NOT to want to get into what they're using.

I've said that I've got nothing against Apple products, I just don't like how closed they are. But for the love of Quantum conductor, they are not any better than the competing products. You're just too damn inept to learn what to do with them. Most Apple products, like the Macintosh computers, are a good example of a decent balance between openness and a closed system; you really can't do anything to change or tweak it, but on the other hand everything works just fine most of the time. If something goes wrong, then you're screwed and need to contact Apple services for help. "Oh, but with PCs everything just crashes all the time." First, I hope you realise that Macs are PCs as well. Second, no they don't if you know what the hell you're doing. No buts.

Things just get more closed and stupider the more customers refuse to learn what they need in order to use different products. It's insanely grating to think that we used to pop in a VHS cassette and press Play. That worked. Now we pop in a DVD and I hear people asking how they can get to the movie. There are DVD menus clearly telling you what to do, and I still get a call every single week from certain people asking me how to get the proper subtitles. [Edit: What in the name of fucking god. Even I never did that.] It was so much easier with older media. Modern media, for better or worse, asks the user to get into what the hell they're doing.

Then again, people still don’t know the universal markings for PLAY and PAUSE. For the love that 00-Unit has for us, please learn those at least. Standardised markings exist for a reason, and that reason is to make your daily life easier.
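Those markings really are standardised, to the point that the same triangle and double-bar symbols even have Unicode code points. A small sketch that looks their official names up with Python's standard library (`unicodedata` is in the stdlib; the symbol selection here is just for illustration):

```python
# The standardised media-control markings are codified well beyond
# player faceplates: Unicode assigns them code points, and the
# stdlib can tell you their official character names.

import unicodedata

symbols = {
    "play":  "\u25B6",  # the familiar right-pointing triangle
    "pause": "\u23F8",  # the double vertical bar
    "stop":  "\u23F9",  # the filled square
}

for action, char in symbols.items():
    print(action, char, unicodedata.name(char))
```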

But no, when customers are dense motherfuckers who refuse to acknowledge that there's something wrong with them, the shit hits the fan harder than a G-Bomb. And we're supposed to design these people a product that would be easy to use. Existing products WOULD be easy to use if you would just read that one damn comprehensive manual and apply that knowledge to other similar products with a little effort, trial and research. I will continue to develop and design better products for your use, but you need to meet me half-way and put some effort in as well. Otherwise don't blame me when I design you a house that works like a 1984 police state and dictates everything you do and how you do it in order to ensure that things work as intended.

I hate that analogy. A product should be something we can all use as we want. Misusing a product or using it wrong is the customer's fault and nobody else's.

Then again, we have shitloads of free information on the Internet and in the libraries for people to use, and it feels like nobody is doing any goddamn research.

Wii U’s not looking good, SONY’s junk and Microsoft is…

… in pretty damn deep trouble, it seems. For some time now I've said that Microsoft should concentrate on their strengths on the PC and let the XBOX be their tertiary objective, or abandon it altogether. Because Microsoft has done the same thing SONY did, i.e. concentrated on the damn video games rather than on what they know how to do well, their nightmare might come true. Or rather, it might already have in some form.

Let’s go point-by-point.
Are the pads eating the PC market? Yes, but only if we assume that the iPad and other smart devices are something other than PCs. Pads and smartphones are not some sort of magical thing of their own; they're as much PCs as laptops are. The proper way to put this would've been that non-Windows based PCs are eating away at Microsoft's share.

Are employees really converting away from Windows based PCs? I've discussed this with people who work in government facilities, and they do admit that there are iPads for certain purposes, but the majority of the work is still done on machines with an actual keyboard. As such, while the iPad is certainly taking its place in the work environment, it's way too early to say whether or not it will completely replace Windows. Perhaps a sort of paradigm shift is happening, where we are moving towards LCARS-style devices from Star Trek.

I wouldn't really oppose those, but there will be a lot of people who will never get used to the lack of tactile feedback. There's also the position in which you type, which most of the time is awkward if you're using two hands.

Now the third point is completely true, and Microsoft can only blame themselves. The GUI of Windows 8 has got a lot of hate for a reason. For one, it abandons a lot of functions that users expect from Windows. It doesn't help that Microsoft has emphasised the touch-screen function a lot, and if you're a home PC user this function becomes more or less completely useless. When I gave the retail version of Win8 a thorough testing, it was clunky, rather horrible, and felt like everything was buried beneath something else. Microsoft should have looked back at what worked instead of removing whatever they liked. Will Windows 8 replace Windows 7 like every other version has done previously? It's hard to say, but I've heard rumours that this won't be the case in every institution.

Windows 8 is also the reason why loyal developers are moving away from Microsoft. Technically this point, and all the others up to point seven, can be crunched into one sentence that sums it all up: Microsoft dedicated so much of their attention to the XBOX and 360 that they forgot where their true business and strengths were. Because of this, Windows' development has taken a hit and the company has clearly lost sight of what the hell they're doing. You can see that a lot of Windows' properties have changed for the worse since XP, though there are truckloads of improvements as well. Still, we can all agree that something was never quite right with Vista or 7, and that as light and efficient as 8's core might be, its functionality is pretty damn awful (unless you train religiously on it).

It's no surprise that as Microsoft is losing on their main front, the XBOX series is also suffering. The XBOX doesn't bring in any money and only spends whatever Microsoft made in the '90s and early '00s. There have been some reports of Microsoft making a loss on every 360 sold, so if Microsoft were to lose their main pillar, i.e. their computer OS monopoly, the company would be in very deep trouble.

If Microsoft doesn't get their business together, the worst possible scenario is that Windows loses its place in the work environment and the next XBOX bombs worse than the PSVita. It's their third time trying, and I really hope they get it right. However, the blowing winds tell me that this won't be the case.

However, while we certainly can judge MS and SONY at this point, the earliest we can say anything about the Wii U's success is after the holiday season. I'd put that somewhere around February or March.

A matter of ease of use

You open the case, pop the cassette into the player and press Play. It's that easy to use a VCR. With disc formats it has always been a little more convoluted: you open the case, pop in the disc, navigate the menus, and then, possibly, you are able to press Play.

Convoluted is a keyword when it comes to designing for use. You want to avoid it as much as possible. However, making things far too simple in certain fields also produces results that nobody really likes. VCRs are in this regard a shining example in concept, as they require minimal input from the customer to access the purchased content and get the most out of the player's basic function. However, the designs of different players add loads of extra features, from different recording speeds to numerous other functions I've already forgotten about. Nevertheless, this kind of simplicity is always good to have as one of the goals in any design.

DVD and BD are a slightly different matter. Personally I do not find them any worse than any other playback device in this regard, but I can't help but think that something went wrong when designing their interfaces, either on-disc or in the players.

While I have nothing against menus in disc formats, I do have a lot against their interface design in most cases. How many times have you found a disc with loads of stupid design choices, either visual or functional? Simplicity and ease of use are lost on some discs, and I have to applaud a lot of cheap releases for having the right idea and putting the budget into something other than damn menu screens. But no, almost all of the bigger releases have animated menus, menus with music, menus with selections that make no sense. I recall having a disc somewhere where the menu selections were allocated to the four corners of the screen, and it made no sense how the indicator moved between them. You'd expect it to move from the top-left corner to the right and then down, but I can assure you that it didn't. It shouldn't be that hard to make a simple intro screen, perhaps with a theme tune playing in the background, and a simple-to-use menu.
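The "expected" behaviour here is nothing exotic: it's just reading order, top row left to right, then the next row down, and it's trivial to implement. A minimal sketch in Python (the labels and coordinates are made up for illustration, not taken from any real disc):

```python
# Hypothetical sketch of putting four corner menu items into the
# "reading order" a viewer expects the highlight to follow:
# sort by vertical position first, then horizontal.

def reading_order(items):
    """Sort menu items row by row, left to right."""
    return sorted(items, key=lambda item: (item["y"], item["x"]))

corners = [
    {"label": "Play",   "x": 0,   "y": 0},
    {"label": "Scenes", "x": 900, "y": 0},
    {"label": "Audio",  "x": 0,   "y": 500},
    {"label": "Extras", "x": 900, "y": 500},
]

# → ['Play', 'Scenes', 'Audio', 'Extras']
print([item["label"] for item in reading_order(corners)])
```

One sort key is all it takes to make the indicator's movement predictable, which makes the discs that get it wrong all the more baffling.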

But simplicity can also cause problems. With disc and cassette players it's easy to make them function with a minimal amount of effort from the user, but something like a computer operating system is a different kind of beast.

Windows is arguably the best operating system out there; otherwise it wouldn't be as widespread and abused as it is. However, with every new iteration new problems arise, as Windows is made into something that everybody can use.

There used to be a time when using a computer required actual knowledge of how the hell you use a computer. I'm ashamed to admit that I've forgotten loads of the DOS commands that old computers needed, and I used to be able to code in Pascal quite well, but with these new machines those kinds of skills are unneeded. As Windows has moved further into the realm of ease of use, all the most complex functions have been moved behind the curtains of the operating system, and thus the user is mostly unable to access them. On older systems you were able to access these for your own pleasure, and thus change various settings to your liking.

I do see and value Apple's aim with their computers, but making their machines completely closed isn't really the best option. If anyone bothered to make a proper virus for Mac OS, pretty much every Apple machine out there would be under serious threat. History has shown that pirates and hackers put their effort into the systems and machines with the greatest value, which is one reason why neither the PSVita nor the 3DS has been hacked yet; it's not that it's impossible, it's just that there isn't enough reward in doing so.

As a designer, my aim in most cases is to lessen the stress the customer goes through in learning something new to properly use a device. However, sometimes it would be better for customers to take matters into their own hands and learn to use something more advanced than just the Play button. I would really recommend that everybody learn their computer's OS better, what it does, what different terms mean and all that, because then you are able to use your computer better and closer to its full potential. Same with every other device and system out there.

However, that doesn’t allow the designer to make convoluted shit.