Your own brew of mead

The reason I tend to compare developing electronic games to cooking is that even with the right ingredients you may fail miserably for countless reasons. This could of course be extended to pretty much any field that requires design, but for the sake of this blog we’ll stick with games.

However, the main obstacle with this comparison is that everybody capable has to learn how to cook something in order to produce food for consumption. We’re side-lining all the modern brouhaha of microwave dinners, because even then some preparation is required. In general, very few people are so inept in the kitchen that they can’t use the oven and other appliances for some basic cooking.

This is not exactly comparable with game development, as one can argue that it takes longer to learn a programming language and create assets for a game. This is of course under the assumption that we compare a single chef to a sole developer. While food is a necessity, games are not. They are a common luxury item to many of us, to the point that we barely even realise their worth and are willing to push their value down by any means necessary while still expecting high production values. To be fair, this blog tends to argue that developing and publishing games has become too costly and grandiose, and should be scaled back in a return to form. The video game industry has its own Hollywood, and the same forces have taken effect: the committee.

Hollywood blockbusters tend to be described as committee movies to an extent, with loads of people from marketing and the higher ranks holding a check-list of things that need to be included in a movie, because statistics and research show that a given age group or audience likes certain factors in a given movie and genre. They’re not wrong either. The very reason you have dozens of differently flavoured products is that people like certain things in a certain way. Here the movie is the whole shelf of pasta sauces, and each variant an ingredient of a given film.

I admit that this blog does emphasize the whole statistics perspective quite a lot, perhaps to a degree that has given it a hidden bias. However, trends are made to be broken, and it’s not beneficial to look at just what the paper says works. Ultimately, that approach will only yield one design, one style of product, repeated ad nauseam. Film trailers tend to be a prime example of this, where they follow what was proven popular to the point of each trailer having essentially the same blueprint independent of the movie, genre or studio. An example that pissed yours truly off a few times around was the BAWWMMM sound effect from Inception. Let’s not forget the distorted booms and stuttered downers either. Guardians of the Galaxy did set a new trend for comic book movies with its use of music, for better or worse.

Nevertheless, there’s very little reason to fix what’s not broken. That’s not to say we can’t make previous items obsolete. In fact, we can make any design obsolete as there is no one perfect product out there. Well, the only good contender for that title is the Kikkoman soy sauce bottle, but we’ve talked about that already. Here’s where the whole thing about your own brew steps in.

At home you can produce your own mead or wine and have it taste pretty much perfect to your taste. Same with cooking and games. However, once you step into the marketplace, your own tastes matter very little.

That is to say, whatever a designer thinks may be a great product for the user does not tell whether or not the product in reality is a great product for the user.

We can’t completely separate personal preferences from cold statistics when producing a product, be it a game, book or food. They need to be married together in a harmony that carries the personal investment through while presenting it in a fashion that is relatively easy to digest. This is not an argument for dumbing down; this is an argument for creating a product that meets the consumer half-way, so that its more intricate aspects can be absorbed.

To use an example, we had plays before movies. The jump from one to the other, while somewhat drastic in technology, ultimately relies on the same core elements while having its own identity as a form of media. From there we have movie genres, techniques and so forth that have taken the field onwards, both in terms of visuals and storytelling. Consumers have an easier time adopting new movie formats and conventions because consumers, and most of the industry, accept them as holding valid values across the board.

The same goes for video games, albeit due to the harsh divide between the Blue and Red Ocean markets it should be noted that the trends valued in one do not necessarily carry over to the other. These values are not just technological or design aspects, but also philosophical and political. As video games are escapism (undervaluing escapism as some sort of lesser act, or even as detrimental, is a topic of its own) there are subjects that can be handled well, and subjects that end up forced on the player from a certain perspective.

The problem here is that the biggest sin a game can commit is to take control from the player without their own action, e.g. for a cut scene. This lack of control is best shown in RPGs, where some games tend to showcase a topic through one facet only and ignore all others, deciding for the player in black and white terms what should be done and how. This sort of railroading is done for the benefit of the story, to the detriment of the game’s play.

I would argue that in both the game industry and Hollywood the execs and marketing departments should lift some of the pressure off the developers, but at the same time these creators should not ignore the audience’s wishes and wants (while aiming for their needs). The mead you brewed might be the best shit you’ve ever tasted, but your neighbour probably thinks it tastes like piss.

Consumers purchase what they like. No sensible person would put their hard-earned (or Patreon) money into something they don’t deem worth the effort it took to earn it. Corporations exist to make money, and the way they make money is to produce goods and services that interest the consumer and are in demand; thus the consumer in the end dictates what goods are produced through their use of money.

However, no organisation is ever required to make anything the consumer wants. They don’t need to include elements that would hit the consumer consensus. That is, if they don’t want to make any profit on their product.

To use an example, the non-controversy of Ghost in the Shell‘s lead being Scarlett Johansson irked some, while most of the rest of the consumers didn’t give a rat’s ass for two reasons: they had no prior experience with the franchise, and they’re not obsessed with who acts. Johansson has star power that attracts the general consumer, and she has shown herself to be a capable action movie star from time to time. So for a company aiming for profit, choosing her over lesser-known actresses is natural. After all, the licensed company has all the power to decide over the product, and the decisions made will be reflected in the box office. At no time are they required to pander to an audience, for better or worse.

To take this a bit further and delve into the subject: at no point is there any reason to create a cast of characters of diverse backgrounds in a given movie or work. This can be twisted in multiple ways, but be sure to take this just as it’s said; the provider can do whatever they like with their product. The only way to really change what is provided is either by making the alternative a more viable option for profit, or by producing a product that fulfils that niche.

Just as companies like Twitter and Facebook can run their business in whatever way they like, just as much the consumer of these platforms can decide that their time and money are better spent elsewhere. The discussion of what is moral, or what the responsibilities of huge platforms that have become part of everyday life are, is a discussion for another time. However, perhaps it should be noted that companies do tend to ride whatever is on the boiling surface of social discourse and will take advantage of it in either direction. Pepsi’s recent commercial with a protester giving a can of Pepsi to a police officer as a supposed gesture of friendship, while on the surface wanting to comment on current events (which can be read oh so many ways), is ultimately advertising and signalling towards a certain crowd. It’s PR management after all.

It goes without saying that if someone thinks there is a market, for example, for a certain kind of movie with a certain kind of lead actor, surely they’ll tackle this market and rake in the profits themselves. That’s capitalism, after all. Finding a niche to blossom in is the best way to climb to the general consensus. This is not a Make-it-yourself argument. A niche that has demand is usually filled by those who know it exists and have a little know-how to tackle the market. That know-how can even be purchased nowadays, thanks to all the companies and individuals offering market research and help in setting up a company.

All this really ends up at the good ol’ idea of wallet voting. You buy what you like, you don’t buy what you don’t like. I’m told time and time again that wallet voting doesn’t work, and every time I have to respond with laughter; it does work, more people just vote against your interests. This is consumer democracy decided through the free use of money. However, there is a problem within this. There is always a demographic that wants to control a product or field of products without consuming the product itself. This twists the provider’s perception to an extent and can even prevent the production and release of a product that would otherwise have faced no problems. The past case of Grand Theft Auto V being pulled from stores illustrates this, and maybe the whole issue with Dead or Alive Xtreme 3 should get a shoutout too.

A product that sees the most sales doesn’t mean anything else but that consumers deem it worth their money. Whatever other reasons may lie behind the decision to invest money in a product is up to the individual, and a separate study would be needed to find those reasons, as they are not something that comes up through raw sales statistics. Often you can’t even deduce what sort of consumer group has put their money into a given product, beyond what the product itself promises.

A traditional corporation aims to invest in the development of a product and its sales to rake in money to fill the pockets of its investors and pay the workers, as well as to put money back into the development of future products. This of course requires the consumer to value the product in the first place. However, in recent years there have been providers, especially game developers, who seem to consider it their right to be paid and to gain success by the sheer virtue of providing something, be it in demand, wanted, needed or not. Naturally, if your product does not meet the demands of the consumer, you shouldn’t expect high profits.

Of course, you could claim to be a stereotypical art-type provider and do your piece for the sake of love of it, to express yourself to the fullest and never see a dime.

This is not to say a provider can’t make something described above and still make money. Finding the right balance between the thing you want to do and providing for the consumers is tricky business, but not impossible. It just takes two things: hard work and research. Guts are optional but recommended.

As you might have surmised, this topic was originally supposed to be part of the Another take on customers series of posts, but we’re a good 40 posts away from our next hundredth post. Thus, I decided to put this down now rather than forget the content I had scribbled into a memo.

The Thing of remakes

Remakes seem to be a subject I return to yearly. This time I was inspired by a friend’s words: Remakes of great movies have an almost impossible task to improve on the originals. I’m inclined to agree with him, and the same goes for video games, generally speaking. Even with the technology gap between now and a game from e.g. the NES era, it’s still a task that is rarely done right.

I admit that the requirements this blog tends to set for remakes, mainly that they need to influence the culture of gaming in some significant way and make the original completely and utterly obsolete, are almost far too high standards to meet. Almost is the key word; if you’re not going to make something better than the original, why make it at all?

The same applies to movies to a very large degree, even to prequel remakes of sorts. John Carpenter’s The Thing is probably a good example of this, in both directions. Originally a 1938 novella named Who Goes There?, it was adapted to the silver screen for the first time in 1951 as The Thing from Another World, just in time for the 1950s boom. While Carpenter’s 1982 version is far truer to the original novella, it still draws elements and inspiration from the 1951 movie. The two movies show what thirty years of difference can do in film. While the 1982 movie obsoletes the 1951 one in pretty much every way, it could be argued the older film is worth a watch for the sake of perspective. However, it does lack the signature element of the Thing itself: mimicry. Then again, perhaps it could be said that Carpenter didn’t remake the 1951 movie, but stuck with the source material all the way through.

2011 saw a new version of The Thing in the form of a prequel, but it’s essentially a beat-to-beat remake of the 1982 movie. Whether it’s a good movie or a terrible one is up to each of us, but perhaps one of the less voiced opinions is that it was unnecessary. Much like other side stories, prequels and sequels that expand on story elements that never needed any expansion and were best left as they were. After all, we’re curious about mysteries that are not wholly elaborated on, but often feel let down if that mystery turns out to be terrible. I’m not even going to touch the PlayStation 2 game here; it’s just a terrible piece.

Both games and movies stand on the same line with remakes; they need to have the same core idea, core function if you will, and create something more era appropriate. One could argue that Mega Man X is a good remake of Mega Man. While it has a new lead, new enemies and stages, it evolves the formula and tackles the franchise in a new way. The idea is still the same nevertheless; beat a number of boss robots in an order selected by you and then advance to the multi-levelled final stages before you face the mad last boss.

However, both Mega Man and Mega Man X got remakes on the PSP, and while we can argue whether or not they obsolete the originals, they are pretty much beat-to-beat replicas with some new stuff bolted onto them, and don’t deviate jack shit from the source material. This isn’t the case with the Ratchet and Clank remake, which opted not only to change things around, but changed them so much that it could have been a completely new and independent game.

Perhaps this is where we should make a division between reboots and remakes. Maverick Hunter X is a remake, whereas Ratchet and Clank 2016 is a reboot. Reboots can and often do change things around to fit a new, reimagined world. That’s one of the reasons why reboots don’t go over well with long-time fans, as they mean the series they’ve been emotionally (and sometimes financially) invested in for years is no longer the same. There’s an 80-minute video that goes over how Ratchet and Clank‘s reboot missed the points of the original game. If you’ve got time to kill, it’s a good watch, especially if you’re even a passing fan of the franchise.

Mega Man as a franchise is an interesting entity in that for almost two decades it had multiple series and sub-franchises running alongside each other. While Battle Network could be counted as a reboot in modern terms, the 2018 series will probably be a total franchise reboot, at least for the time being.

The point of reboots is somewhat lost when the end product does not stand up to comparison with the original. Some claim this is unfair, as the new piece should be treated as its own individual piece without any regard to the original. There can be validity in this, if the product can stand on its own without resorting to winking at the player about the previous incarnation. This is a double-edged sword; on one hand it’s great to acknowledge the history your remake stands on, but on the other hand any sort of reliance devalues the whole point of a remake. It’s a line that needs to be walked carefully.

Perhaps the thing with remakes (or reboots for that matter) really is that they face a task larger than just the original product; they face the perceived value of the product among consumers. People tend to value things on an emotional level despite their faults (like yours truly with Iczer-1), and when something new comes into play to replace them, our instinct tells us to resist. It doesn’t help that most remakes and reboots tend to be terrible in their own right, even when removed from the original piece. Just look at Devil May Cry‘s reboot, which luckily seems to be just a one-off thing. Maybe remakes like this are needed from time to time to remind us that capturing lightning in a bottle twice is far harder than it seems, and perhaps creating something completely new is the better solution.

A Necessary Higher Price?

Whenever you visit a craftsman’s workshop, be it an artisan’s, a wood craftsman’s or whatever else, their shops usually have a decent range of items, from something that may cost five to fifteen euros to the proper items costing from fifty euros up. It should not be any surprise that the best-selling items are the little trinkets and jewellery, as their prices sit at the bottom of the range. Their price relative to production cost is nevertheless higher than anything else’s in the workshop, and that is out of necessity.

Wait, isn’t this blog supposed to be pro-consumer? Is this a hundredth post? No, and this is pro-consumer. The more information the consumer has, the better. Nevertheless, we must consider reality as well. The big item orders, with their production and installation costs of several hundred or thousand euros, may not bring in a large income in the end. Maximising profit is any business’ main goal, and an absolute necessity for smaller companies or individual entrepreneurs. By minimising some production costs and maximising the price consumers are willing to pay, a person can maybe make a living.

For example, small full-metal jewellery, like crosses and such, is made of one or two millimetre thick steel. Its shape usually is either something slightly original or follows the general consensus of what looks good. When mass-produced, production costs tend to be low, as you can get the pieces laser or water cut at a very low price. Adding some of your own flavour, like hammering the surface and painting it black, often produces a look as if the jewellery was hand-made in a forge from a piece of steel. Production costs for an individual piece might be something like two euros (perhaps five with the modern cost of materials, though I know of cases where laser-cut jewellery has cost as low as 20 cents) and the final price tag on the item might be fifteen or twenty euros.

An example of a hammered product with a failed paint application

The reason small items with a relatively high price compared to their production costs exist is that they sell the most. These trinkets are often gifts that fit in the pocket and might look a bit special, especially if they have some local flavour to them. They’re also great for impulse purchases, as the low price seems almost insignificant compared to the hundred-euro candelabra next to them. If all the work is done locally, the price won’t even have a big chunk of logistics in it.
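The margin arithmetic behind the trinket example above can be sketched out in a few lines. The trinket figures (roughly two euros in production cost, a fifteen-to-twenty-euro price tag) come from this post; the figures for the large commissioned piece are purely hypothetical numbers for contrast, not real shop data.

```python
# Illustrative markup and margin arithmetic for the workshop example.
# Trinket numbers are from the text; the large-piece numbers are hypothetical.

def markup_ratio(price: float, cost: float) -> float:
    """Sale price divided by production cost."""
    return price / cost

def margin_pct(price: float, cost: float) -> float:
    """Gross margin as a percentage of the sale price."""
    return (price - cost) / price * 100

# Small trinket: ~2 EUR to produce, sold at 20 EUR.
trinket_markup = markup_ratio(20.0, 2.0)    # 10x markup
trinket_margin = margin_pct(20.0, 2.0)      # 90% of the price is margin

# Large commissioned piece (hypothetical): 350 EUR in costs, sold at 500 EUR.
large_markup = markup_ratio(500.0, 350.0)   # ~1.43x markup
large_margin = margin_pct(500.0, 350.0)     # 30% of the price is margin

print(f"trinket: {trinket_markup:.1f}x markup, {trinket_margin:.0f}% margin")
print(f"large piece: {large_markup:.2f}x markup, {large_margin:.0f}% margin")
```

The point the numbers make is the one the post argues: the trinket's absolute profit per piece is small, but its margin is enormous, so volume sales of cheap items can carry the shop while big commissions merely cover their costs.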

Of course, the price wouldn’t be that high if people weren’t willing to pay it. The consumer rarely considers the end price they’re willing to pay in terms of logistics, raw materials and the work put into the product. The perceived value of a product weighs more in the end than more practical and solid information. The fact is that we as consumers pay for what we consider valuable to us (or to others, depending on how much you want to impress people with your new shit) and modify our purchasing behaviour accordingly. Trading card games are a great example of this. While the cards themselves are practically worthless pieces of cardboard and ink, the perceived value of their rarity within their specific games, or their usability in a given deck, gives them a high market price. Rarely do you see a card being high in price because it has exceptional artwork or the like. The value of these cards also tends to shift rather quickly as formats change, something that yours truly is not keen on.

Another, somewhat different, example of maximising profits while cutting production costs is the lack of a headphone jack in smartphones. Even when some phones nowadays lack the jack for traditional headphone gear in favour of wireless pieces (which frankly tend to outright suck in utility), the end price of the phone is still the same. Wirelessness doesn’t excuse the same price, as Bluetooth is a standard in modern phones across the board. In cases like this we can question whether or not it’s just or acceptable for big companies to keep the same sales price for their phones when their production costs have seen a cut. After all, we’re not talking about a trinket here, but several hundred euros’ worth of money.

The question of whether or not upping the price like this is ethical towards the consumer is somewhat moot. On one hand it is true that in an ideal world products wouldn’t cost much more than their production costs, personnel salaries included. In reality this doesn’t really work, due to how life tends to kick us in the balls. Profit is also necessary in order to gather money for industry-related projects, additional raw materials, new equipment and so on. Profit doesn’t magically end up in a bank account as a plus mark. I’m sure all of us know the feeling of wanting, needing to expand on something you directly need, but simply lacking the budget for it.

This can turn into purchasing politics very easily. Voting with your wallet is essentially the best way to hurt a provider (even a 10-15% drop in sales with video game sequels sounds alarms in companies), but it is also used as a way to show support for whatever reason. DLC, especially visual flavour DLC and the like, is like these trinkets. Producing it doesn’t cost much at all, while its price tag can be surprisingly high. Again, this is just minimising costs while maximising profits. A consumer may buy these trinkets for such perceived values as them being cool to have within a game as options, or for having a “complete” game in their collection with all the extra stuff and feeling satisfaction through that, or just because they happen to like the developers and wish to show some support by providing them with further sales. I’m not really sure how much I can personally encourage buying any DLC for a game, but that’s something any and all individuals have to decide for themselves. It is a question of opinion in the end, and all of us have the right to our own.

Censorship is not transformative

While it may seem at times that this blog is against art in some ways, the reality is that I am against the wild use of the term. Not everything needs or deserves to be art to be a highly valued cultural commodity. This blog largely defends the rights of creative industries and their aims to create works. However, I also come from the consumer perspective, where the creator often needs to take the market’s wants and needs into account in order to succeed. Needless to say, this entry is going to differ from the usual writer’s persona a bit.

Censorship is not that.

If an author intends his work to be a certain way and releases said piece in its intended state, it is not the job of others to come and change that product to fit themselves afterwards. If we are to define art as a way to express oneself, no one else should have a say in how or what the creator wishes to express. Censoring or changing one’s work without transforming it essentially infringes on a core element of art itself.

A product is transformative when an original piece is taken and given a new form. For example, Youtube is filled with videos that fall under the transformative label, as they take existing videos and sounds and create something new based on them. MADs fall under this same category. They do not infringe on the original author’s intent, since the original is still there, unaltered. Hollywood seems to have a hard time grasping this.

To argue that censorship is transformative is nothing short of incorrect, as censorship is the intentional suppression of any element of a work, as judged by some faction or person, for whatever reason, be it political or due to supposedly objectionable content. Censorship does not transform elements of a work into something new; it simply removes the pieces it doesn’t like. It doesn’t transform the work; it doesn’t derive anything new from it.

While human history is short on the cosmic scale, we’ve still had numerous works that are significant to our world and cultural heritage. Many of these are under the gun of censorship, especially nowadays, when bikini-clad women in games are seen as the worst sort of offending material there is. Some even argue that Shakespeare should be censored to be more timely. What a terrible waste that would be. Even if we removed the Immortal Bard from the equation, the fact is that his works are significant both culturally and historically. Understanding them is to understand the time they came from, as well as modern English as a language.

Censoring the likes of Shakespeare for whatever reasons, or Mark Twain for that matter, shows a complete lack of belief and confidence in the people. Essentially, removing nigger from Twain’s books shows that the factions doing the censoring have no faith in the people to make the distinction between the era the book was written in and now, or to see that the term is used in a form that offers no offence. It is unfunny irony that Huckleberry Finn would see censorship in this way. Often the intent of censorship in cases like this is a more positive and “fitting” release of the work for a given era, but as it always is, the path to hell is paved with good intentions.

If one were to argue that Shakespeare’s King Lear is a copy of the legend of Leir of Britain with elements from Holinshed’s Chronicles, I would argue back that it is not. To use something like Star Wars as an example, using existing works as a template to create your own work is not plagiarism, or in Star Wars‘ case, even transformative. The fact that George Lucas used classical literature, especially the concept of the hero’s journey combined with elements inspired by Kurosawa’s Hidden Fortress, to create something that was essentially new and needed in the late 1970s speaks volumes by itself. Creativity feeds back on itself, just like any field feeds back into itself. It wouldn’t be incorrect to say that all creative fields derive from each other and from themselves, but that doesn’t keep anyone from taking elements, rearranging them and giving them new approaches to create something original. Sure, some resort to blatant ripping off, but that’s another issue.

Of course, it is well known that Shakespeare’s works are inspired by existing tales, but we don’t exactly celebrate the plots of his works. They are celebrated because Shakespeare’s works broke down existing boundaries both socially and in language. Hamlet‘s plot is not why it’s so highly regarded; it’s because Hamlet himself is so well written as a character, and because of how Shakespeare conveys his growth and anguish through and through. Act III, Scene I of Hamlet is not great because To be or not to be has become recognized as almost universal anguish, but because of how the whole soliloquy bares Hamlet to the audience. There is no actor who would not want to tackle this famous line and breathe his own life into it.

We do not have reverence for Shakespeare’s works because of him; it’s the opposite.

The question of whether or not we should separate the creator from his work is something we all should consider. I would argue that as often as possible we need to separate the work from its author, simply because our view of the piece would become coloured and biased if we have strong opinions on the creator. It is very easy to veer into identity politics if we have something against a creator, as is the case with Dana Schutz’s Open Casket. The case shows how anyone can interpret a painting how they see fit and disregard the author’s intent. While we can debate which one is more important, we should always remind ourselves that freedom of expression is a supposed tent pole of art, and as such should be respected over personal views. Calling for her painting to be burned is very reminiscent of book burnings from various eras, e.g. the German Nazi party’s book burnings. While we can argue about the painting itself, no subject should be banned from anyone within the proper limits of law.

If we were to ban certain people from certain subjects to create works on, the opposite should be true as well. Otherwise we’d be discriminating against one group and favouring another. However, such a limitation would kill the exchange of thoughts and ideas, as well as the discussion between and within these groups. Creativity would grind to a standstill when nobody is allowed to wander outside their own region, creating a sort of echo chamber. No outside aspects would be brought in to give new and fresh ideas. Some would certainly welcome this sort of approach, as long as it aligned with their own views.

The world already has a history with this sort of approach, at least a one-sided example. Socialist Realism was practised in the nations of the Soviet Union, and essentially prescribed a canon in art and other creative fields. While creative fields are not political by their core nature, politics can be applied to them. Socialist Realism was nothing short of political propaganda and its core intent can’t be separated from politics, but we can sideline that here. Until it fell from favour around the 1960s, no other idea or thought was allowed; it governed the creators.

The Chinese communist party did even worse by almost erasing their old culture, destroying much of the Chinese heritage. Jump here to read a bit more on that. It’s interesting to note that both of these are communist and Marxist examples.

In order for discussion and the exchange of ideas to move forwards, we need to allow the creation of things we may object to, and view them outside our own selves. Nothing good comes from silencing those we disagree with and pushing them underground, when we can lift them up to the stage of ideas and allow everyone to see and weigh these ideas themselves.

The will and skill to express oneself has been around longer than the written word. If we’re to value art as we like to see it, it’d be great if we could stop fucking around with it and let people show their stuff. If one is ready to censor or ban someone’s freedom of expression, he’d better be ready to face censorship himself.

Experience and digital space

Short answer: No. Long answer: It’s a bit more complicated than that. With digital media, the ontology often concentrates on viewing the relationship between the consumer, the media and the culture of the media. The digital part is significant. While there are now a few generations that have grown up in a world that never lacked the digital component, it is still a relatively new introduction on a historical scale. Nevertheless, it is present everywhere nowadays, and digital elements in our lives will most likely keep growing as time goes by.

Timothy Druckrey, a theorist of contemporary media, even went so far as to argue that it would not be possible to describe or experience the world without technologically digital devices. He argues further that the evolution from mechanical to technological computer culture has been more than just a series of new techniques and technological advances; it is more about the evolving dynamics between culture, interpretation and experience. Much like Druckrey’s colleagues, he argues that representative works are based on experience, and it would be hard to argue against that.

Video and computer games are based on experiences people have. The first computer RPGs had their roots in the Dungeons & Dragons campaigns people played, and this applies to the origins of Ultima as well. Miyamoto has stated that his goal with The Legend of Zelda was to make the game feel the same way as exploring a city you have never been in before. You can almost see the overworld map as a city layout in this sense, where certain paths are alleys, larger open areas are parks and numerous dead-ends permeate the game. Or maybe that’s just me. Satoshi Tajiri, the name behind the Pokémon franchise, based the game on his own experience with bug catching. Japan has a history of kids taking up bug catching as a hobby, and the latest big craze was during the 1990’s. When you consider how a kid has to cross creeks, run over rivers and search the forests for new bugs to catch, you begin to see the adventure and the excitement that Tajiri wanted to convey in Pokémon. You also begin to see where modern Pokémon has started to veer off, emphasising plot over adventure. There was a good article about how Yu Suzuki put Virtua Fighter’s developers through martial arts training each morning so that his men could animate a punch or a kick right.

That is not to say a game can’t be created without first-hand experience of the subject itself. Hideo Kojima has never been a spy or a soldier on a battlefield, but he nevertheless put his experience of Western movies to use in Metal Gear. You can see the change in certain visuals in Metal Gear Solid 2, when they got an actual military advisor on the team. For example, Snake no longer pointed his gun upwards, and the way characters handled weapons changed overall. A small but rather significant change when you consider how much the Metal Gear games depend on the whole experienced-soldier schtick.

Nevertheless, all the above-mentioned games are representative of some sort of experience and allow the player to experience a simulation of it. With any new sort of media there has been the fear of losing something important to humanity, if you will. With digital media, the consumer’s identity has become a question through fears of how any new medium might (or rather will) change our way of thinking and the way we live.

Without a doubt we have both real and virtual spaces, as well as the identities that go with them. We wear a different persona when we are with our parents than with our friends, and the same applies to the virtual space. Since the 1990’s the virtual space has become more and more of a daily thing, to the point of Facebook and other social media becoming almost essential. However, even in these spaces we wear a persona that differs from the others. Much like how, when writing this blog, I put on a persona you don’t see in other virtual spaces, though it overlaps heavily with everything else nowadays. While there is no physical aspect to virtual spaces (they are digital and non-physical by definition), they nevertheless are real and can carry over to the “real” world. However, we can always choose the spaces we interact with, though this has led to the birth of extreme comfort zones, where one must feel safe all the time rather than challenging oneself and broadening one’s horizons. After all, nobody wants to get stuck in place for all eternity. Unless they get hit by a car and fall into a three-year coma.

Whether or not digital media and virtual identities change our physical selves is a topic for a different post (they do, but the extent is expansive), but I can’t help mentioning that the experiences consumers gain from digital media affect us just like any other comparable source. After all, electronic games are an active medium rather than a passive one like movies or music, and they require the consumer to learn in order to advance. This has led some to argue that games promote violence by teaching violent methods.

Eric Harris and Dylan Klebold are the two names responsible for the Columbine shooting in 1999, and two years later Linda Sanders, who lost her husband in the shooting, sued 25 different companies, including Id Software, Apogee Software and Interplay Productions, claiming that the event would not have happened had such extremely violent games not existed. It was argued that certain games allowed the two assailants to train their shooting skills with precision and affected the two in a negative way. However, as we’ve seen multiple times over, games do not cause kids to turn violent, and it would seem to be far more about the individual and their mental health than the media they consume.

However, it must be said that even when games are escapism from the real world, they are still a product of real experiences. Playing may be just a game much like any other, but the more the real world expands into virtual spaces thematically and ideologically, the less separation there is between the two. Ultimately, playing a game will affect the real-world persona of the player, though the question of how much is very much up to the individual consumer. Games have been discussing censorship, violence and current topics for more than thirty years now, and for a medium that is about escapism to a large extent, that does not bode well. How much value can we put on a digital world that does not make use of its non-real capabilities and instead ties itself to the real?

Perhaps the digital personae we use have become less important as the melding of the two worlds continues, and the identity we assume is an amalgamation.

ICD-11 video game addiction is being pushed without proper backing

Without a doubt a certain percentage of people who play electronic games overdo their hobby. However, this applies only to a small percentage of the overall enthusiasts and hobbyists. Furthermore, it would seem that problematic gaming, that is, consumption of electronic gaming that is detrimental to everyday life, wears itself thin over time and dissipates on its own. A longitudinal study showed this with 112 adolescents. I’ve already covered why the proposal for gaming disorder has no basis, but it would appear the push to include it in ICD-11 has its own merits behind it. Merits that serve neither science, culture, markets nor consumers.

Ferguson wrote that less than 1% of people experience video game addiction. His writing is a good read. Game addiction is in itself of a very different nature from e.g. gambling. I’ve actually covered the issues with pairing electronic gaming and gambling previously, but to make a short story even shorter, video game addiction is far more often a symptom of an underlying problem than the cause in itself. Ferguson’s own study supports this. Hell, there’s even a paper arguing against the very concept of video game addiction.

In a discussion with Ferguson, an administrator at the World Health Organisation acknowledged that political pressure from countries, particularly Asian ones, factored into the inclusion of video game addiction in ICD-11. If countries are pushing its inclusion, that means the scientific basis comes second at best, and whatever political stance these nations hold comes first. That is extremely dangerous, as adding video game addiction opens the door for other, far more intrusive and harmful suggestions to be included under its umbrella. Considering video game addiction is extremely loosely defined and would require far more research than it has, there’s no guarantee any of the future additions would have better research behind them.

You may be asking yourself what nations would have need or use for this sort of addition to the ICD-11. Some nations have reported more deaths from non-stop gaming than others, and mostly we hear these reports from either China or South Korea. In 2005 a 28-year-old man died of heart failure during a session of Starcraft, the BBC reports. It is interesting to note from that article that despite Starcraft being a real-time strategy game, professor Mark Griffiths only talks about MMORPGs, a very different genre of game. You have far less interaction with your opponent in Starcraft than you have in e.g. World of Warcraft.

South Korea has seen drastic changes in its electronic game landscape, and one of the more worrisome changes came around 2014, when some members of the government began to regard games as a detrimental pastime. South Korea has discussed enacting a game addiction bill to limit not only the amount of time people should be allowed to play, but also the games themselves. However, when you have legislators directly comparing video games to tobacco and alcohol, something is amiss. South Korean gaming culture is far different from any other; e.g. you can actually graduate to become an e-Sports player. However, much like anyone else with a career in “sports,” e-Sports players suffer from injuries as well. Seeing how South Korean culture has almost twisted games and e-Sports into a national pastime, it’s no wonder a lot of young people are willing to take their chance at becoming a player worth millions of won.

The thing is, South Korea does have a problem with gaming, but as we lack evidence for gaming addiction (we have more research arguing against it, as linked above), it is far more probable that the South Korean gaming problem is a symptom of underlying social and cultural troubles. Passing legislation that equates games with drugs and alcohol won’t cure the problem; it will simply manifest some other way later down the line.

Passing a law based on game addiction is hard when you have nothing to base it on. However, if ICD-11 were to recognise video game addiction as a valid illness, there would be no need to debate or research the issue much further; after all, you could simply point out that it’s in the books. That would be an injustice.

One of the game-limiting laws has already passed. The Shutdown law, passed in 2011, bars people under 16 from playing online games at night between 00:00 and 06:00. While this might sound decent in principle, it is not the government’s job to do what parents should be doing. Furthermore, this law has been challenged on a few occasions as unconstitutional. Nevertheless, the law is still in effect, albeit nowadays parents can request that the ban be lifted from their child.

China is following this South Korean example with similar legislation that would ban gaming outright for people under 18 between 00:00 and 08:00, and would require computers and smartphones to be fitted with software that tracks down lawbreakers. Both South Korea and China require their people to use their real IDs when accessing their gaming accounts. In the case of South Korea, this is a necessity with many of their websites in general. However, in 2012 the Real Name Rule was struck down and rejected by the courts. The law requiring the use of users’ real names had been introduced in 2007 to combat cyber-bullying. Again, this is treating the symptom, not the cause. Furthermore, as gaming is a million-dollar business, by accusing the game industry of creating addictive products, governments could push for harsher taxation and other underhanded shenanigans to gain more from the revenues. This may sound like a tinfoil-hat idea, but seeing how a few years back we found games journalists colluding and attacking their consumers, and recently the CIA spying on everyone everywhere, it isn’t far-fetched.

Games of any kind, be they sports, card games or anything else, are addictive in their own way. For modern electronic games, unpacking why they could be addictive beyond the usual action-reward scheme is a whole mess of its own. This is because electronic games have more dimensions than gambling. After all, games are a tool that gives people leeway from their everyday life in an electronic way that supports social interaction through a cultural landscape, and they aim to both challenge and please the players at the same time. They are not gambling, with the exception of Complete Gacha in Japan, as gambling quite literally requires wagering money or something else valuable under uncertain conditions for higher gains. Of course, games are designed to pull the player in and be enjoyable, but that is what every form of entertainment does.

If video game addiction were to be tied to anything, it would be escapism. Escapism is always tied to something other than the tool people escape through, and the question I must ask here is: what are people escaping from if they are willing to kill and die over video games?