Are you nervous about the Nintendo Switch 2? Don't be
The days of reinventing the wheel are over.
Murmurings about Nintendo developing a successor to the Switch have quite a few people grumbling – the original has barely been out for a year, and several things, namely an online ecosystem, Virtual Console support and multiplayer, are still muddy. It’s understandable too – traditionally, new Nintendo consoles have used a completely different development environment from their predecessors, forcing developers to split their resources and markets, and forcing punters to drop another $500+ on new hardware. But things have changed drastically in the last five years, and a wholesale overhaul that leaves current Switch owners behind is very unlikely.
In the early days of consoles – pre-internet, pre-digital downloads – developers worked inside a silo. Proprietary software and hardware were the gatekeepers: they locked developers financially into working with a single manufacturer (Nintendo, Sega and so on), which produced exclusive titles and committed studios to a single pathway for building new IP and investing in hardware. These development environments were largely built on custom RISC chipsets designed specifically for each manufacturer, which curbed piracy (by restricting development kits) and created an edge that wasn’t easily duplicated by competitors.
The problem was that each new environment involved a new, custom architecture that had to be “learned” – months, if not years, of working with the hardware to unlock its potential. This is largely why early games in the life of most new consoles tend to be average when it comes to performance, with the most impressive (see The Last of Us on PS3, or Super Mario RPG on SNES) arriving near the end of the hardware’s life. Console makers also imposed (and still do, really) excessive QA processes, mainly down to the quirks of their custom environments. It’s expensive, time-consuming and risky for smaller studios to commit to that process, which is why indie games were rare on consoles until recently.
The sheer cost of designing custom chips also takes years to recoup. It’s because of this that the current slate of consoles – PS4, Xbox One and Switch – all use hardware that is commonly available. The PS4 and Xbox One run on traditional PC guts – semi-custom x86-64 processors with on-die GPUs – which means quicker and easier porting between each other and PCs, as there are fewer changes to make. The Switch, meanwhile, runs on the same kind of hardware that powers phones and tablets – an Nvidia Tegra processor that had already shipped inside a popular Android TV set-top box.
This transition was no accident – while Microsoft was already moving in this direction, Sony was desperate to reduce build costs for the PS4 after the PS3’s high launch price crippled its growth until late in its life. The PS3’s Cell processor, while insanely powerful, reportedly cost billions to develop (alongside the Blu-ray drive) and was notoriously difficult to code for. The Xbox 360 and Wii were also built on PowerPC-based systems, but both were much cheaper and easier to code for thanks to strong developer-side support and lower dev kit costs (which were waived for many independent studios).
The other big thing that has changed in recent years is engines – developers once had to build a game engine for each game they made (or at least the first in a series). Nowadays there are engines like Unity, Unreal Engine, Lumberyard and CryEngine, all of which can produce builds for one or many platforms from a single codebase. So there is little reason to differentiate the guts when the focus is now on software and services. Nintendo and Sony even partner with Unity to extend a helping hand to smaller developers and expand their digital libraries.
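To make that concrete, here is a minimal, hypothetical sketch (in C++, the language most engines are built in) of how a multi-platform engine keeps console-specific code behind a common interface and picks a backend at build time. The class names, the TARGET_SWITCH/TARGET_XBOX macros and the renderer interface are all illustrative assumptions, not any real engine’s API.

// Sketch of the "write once, target many" idea behind modern engines:
// game code talks to a platform-neutral interface, and the build selects
// one backend per target. All names here are hypothetical.
#include <cstdio>
#include <memory>
#include <string>

// The game only ever sees this interface.
class PlatformRenderer {
public:
    virtual ~PlatformRenderer() = default;
    virtual std::string name() const = 0;
    virtual void drawFrame() = 0;
};

// One backend per target; real engines ship these as separate modules
// built against each platform's SDK.
#if defined(TARGET_SWITCH)
class SwitchRenderer : public PlatformRenderer {
public:
    std::string name() const override { return "Switch backend"; }
    void drawFrame() override { std::puts("Submitting frame via the Switch backend"); }
};
#elif defined(TARGET_XBOX)
class XboxRenderer : public PlatformRenderer {
public:
    std::string name() const override { return "Xbox backend"; }
    void drawFrame() override { std::puts("Submitting frame via the Xbox backend"); }
};
#else
class DesktopRenderer : public PlatformRenderer {
public:
    std::string name() const override { return "PC backend"; }
    void drawFrame() override { std::puts("Submitting frame via the desktop backend"); }
};
#endif

std::unique_ptr<PlatformRenderer> makeRenderer() {
#if defined(TARGET_SWITCH)
    return std::make_unique<SwitchRenderer>();
#elif defined(TARGET_XBOX)
    return std::make_unique<XboxRenderer>();
#else
    return std::make_unique<DesktopRenderer>();
#endif
}

int main() {
    // The "game" below never changes between platforms; only the backend does.
    auto renderer = makeRenderer();
    std::printf("Running on: %s\n", renderer->name().c_str());
    renderer->drawFrame();
    return 0;
}

The point of the sketch is that the game logic at the bottom stays identical everywhere – only the backend chosen by the build changes, which is why shipping on Switch, PlayStation, Xbox and PC at once has become routine.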
It’s this new, PC-esque paradigm that makes the PS4 Pro and Xbox One X possible – more powerful upgrades to the original environments. They are backwards compatible, but new software runs better on the upgraded console. It’s also how Microsoft-published titles are co-released on Windows and Xbox at the same time – they share the same development environment. I would bet dollars to donuts that the next Xbox will iterate the same way an iPhone does – with the same software.
Therefore, you shouldn’t be concerned about the Switch 2. Nintendo’s move to ARM processors means it can outsource the core silicon, stabilise its interface and gradually refine it over the next decade. It has broadened its eShop with hundreds of indie titles because it is trying to grow a wider library that will remain accessible for the long term – operating as a service and software publisher in the mould of Steam, taking a cut of distribution rather than relying solely on clever hardware design. Consolidating its portable and home console lines also means Nintendo can focus on a single platform rather than several – making it easier for itself and for publishers to plan for the long haul.
Sure, older hardware revisions will eventually fall out of the loop, but that is a long way off – think of how smartphones and tablets keep receiving UI and software updates for years. The Switch 2 will be just that: an iterative update in terms of power, size and display. Maybe it drops physical media. But it will still run on the same operating system and development environment that the Switch does.