As HTML5 grows, security risks become a bigger issue
The W3C (World Wide Web Consortium) officially approved HTML5 as a complete industry standard in October 2014, but the adoption process started long before that. In 2010, Steve Jobs helped point the way when he announced that Apple would use HTML5 instead of Flash. Since then, HTML5 has grown considerably: around 30% of Fortune 500 companies already use it, and tech giants such as Facebook, Google, Microsoft and Netflix have all adopted HTML5.
HTML5 has a few things going for it, the first being cross-platform support. VisionMobile’s latest survey of over 10,000 app developers found that 42% of mobile developers use HTML5 as their preferred platform, largely because apps written in HTML5 are simple to port to multiple platforms, such as PC, Mac, iPad and Android, or to deliver as web services or SaaS applications in the cloud.
"An HTML5-based app is no different from a web-based application and the same security measures should apply to both," Bogdan Botezatu, senior e-threat analyst for Bitdefender, said. This is especially important because cyber attacks that can now walk right through your digital front door may jeopardise operations, compromise customer data and personal privacy, or even affect matters of national security, at a time when simple and fast obfuscation and tamper-protection technology already exists in the marketplace.
So why take the chance? Management expects that what it invested in and paid for will be delivered not only bug-free and ready for production, but also in a final wrapper that makes it harder for others to copy, steal or tamper with the code, or to compromise the whole business through web fraud, malware injection, data leakage or other nefarious cyber attacks.
When code is stored on both the client and the server as clear-text files, it is hosted on a shared server to which others could easily gain access. Developers can therefore lose control over who is accessing the original source code, unless it is obfuscated or more robustly protected once it is released or signed off as ready for production.
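To make the idea concrete, here is a hand-written sketch of the kind of transformation an obfuscation tool applies automatically: identifiers are renamed and string and numeric literals are encoded, so the logic still runs identically but reveals far less intent to a casual reader. The function name and the licence-key check are purely hypothetical, chosen only for illustration.

```javascript
// Original: the identifiers make the intent obvious.
function validateLicenseKey(key) {
  return key.startsWith("LIC-") && key.length === 20;
}

// After a simple rename and literal-encoding pass (the sort of
// transformation obfuscators automate), the same logic is much
// less readable: names carry no meaning, the string literal is
// assembled at runtime, and the length check is written in hex.
function a(b) {
  return b.indexOf(["L", "I", "C", "-"].join("")) === 0 && b.length === 0x14;
}

// Both versions behave the same:
console.log(validateLicenseKey("LIC-ABCDEFGHIJKLMNOP")); // true
console.log(a("LIC-ABCDEFGHIJKLMNOP"));                  // true
```

Note that obfuscation of this kind raises the effort needed to understand shipped JavaScript; it is a deterrent, not encryption, since a determined attacker can still reverse the transformations.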
You could ask whether going native wouldn’t solve this issue altogether. The code of native applications is compiled before it is deployed to mobile devices, so it’s naturally less exposed, right? Actually, this is a common misconception. Depending on the attacker’s goal, native code can be trivial to crack too. For instance, Android applications are developed in Java, whose bytecode is quite easy to decompile into something close to the original source code. So even native code can benefit from obfuscation prior to compilation, and being a compiled technology should not weigh in your decision between going native and HTML5.