Tag Archives: IoT Design

Hannover Messe: Wibu-Systems and its Partners Demonstrate Cybersecurity in Industry 4.0 Applications


Wibu-Systems applies cybersecurity to Industry 4.0 infrastructures at the Hannover Messe.




Wibu-Systems has organized a comprehensive demonstration program for the upcoming edition of the Hannover Messe, April 24-28, 2017, in Hannover, Germany. In its exhibition space in hall 8, booth D05, and alongside dozens of other IIoT trailblazers, the company will be showing how security ecosystems and applied security frameworks can prevent the cyber-attacks that connected systems will inevitably have to face.

The demonstration starts at Wibu-Systems’ booth, where an embroidery machine from our long-standing customer ZSK Stickmaschinen will be protected with CodeMeter, the all-in-one technology for protection against counterfeiting, reverse engineering, and tampering. As the recent study on product piracy conducted by the German Engineering Federation VDMA illustrates, attacks on the know-how embodied in products, machines, and entire manufacturing facilities are becoming rampant as Industry 4.0 takes off. CodeMeter encrypts source code, machine code, and sensitive configuration data and stores the digital keys, preferably in a hardware-based secure element, namely a dongle in the form factor of a USB stick, a memory card, or an ASIC. The protection capabilities complement license lifecycle management, which also boosts revenues by enabling versatile sales strategies that meet customers’ demands with modular licensing and pricing.

The embroidery machine will provide a syslog (security incident information) to the SIEM (security information and event management) system of a broader demonstration set up by the Industrial Internet Consortium (IIC) and the Plattform Industrie 4.0. In their collaborative effort to outline common guidelines between RAMI and the Industrial Internet Security Framework, the two organizations have gathered 24 companies that have been working closely for the last five months to provide an interoperable security concept. The core of the demonstration can be found at booths C24 and D24 in hall 8, but the demonstration expands into other booths and halls throughout the fair, to other parts of Germany, and even other parts of the world. The goal is to demonstrate that endpoint security is achievable with current technologies even in heterogeneous architectures that mimic a real-world distributed multi-vendor constellation.

Wibu-Systems is also cooperating with SmartFactoryKL, a technology initiative promoted by the DFKI, the German Research Center for Artificial Intelligence. The cross-site demonstration on display in hall 8, booth D20, is joined by 18 pioneering companies, each contributing its specific expertise for the dawn of a new industrial age. CodeMeter by Wibu-Systems provides the technology required to achieve comprehensive and strong IT security standards in Industry 4.0 production facilities. It protects software components against unauthorized access and manipulation. Cryptographic keys, e.g. those used for authentication in OPC UA, are stored and used securely on CmDongles. CodeMeter’s license management capabilities allow granular control over software features, configurations, and confidential production data.

IUNO, the German Reference Project for Cybersecurity in Industrie 4.0, has also made remarkable progress. In the last year, all four application areas of customized production, technology data marketplaces, remote maintenance, and visual security control centers have made great strides towards ultimately offering SMEs a security blueprint. In hall 2, booth B22, we will be presenting the current results of our work to establish cross-domain trust for the manufacturers of networked products. To illustrate the concept of the license-based economy, we will use CodeMeter License Central, Wibu-Systems’ web-based entitlement platform, to associate licenses with beverages: Drinks are served and billed based on the customer’s order. In a separate project, we will explore endpoint security by design: CodeMeter Runtime is used for the remote maintenance of clients, and CodeMeter Embedded for the remote maintenance of machines.

Oliver Winzenried, CEO and founder of Wibu-Systems, is certain that the technology has reached maturity: “Industrie 4.0 started with a vision to optimize manufacturing processes in a sustainable way that would safeguard resources, safety, and the environment. As technologies were developed to embrace this ideal, we came across security and business challenges: We had to reinvent not just the nuts and bolts of production, but also keep intellectual property secure and provide new go-to-market approaches. Together with other Industrie 4.0 frontrunners, we have created multi-platform, standardized, and robust cybersecurity measures that all intelligent device manufacturers can implement right now.”

via IOT Design

Embedded developers need an open source over-the-air software updater without the lock-in

The topic of OTA software updates for embedded devices is gaining attention as embedded systems are increasingly being connected. Highly publicized breaches continue to demonstrate the lack of security in the design of embedded systems. The software update process itself is intricate, with many security considerations, and it plays a critical role in the security of the Internet of Things.

A good example is the Jeep Cherokee hack in July 2015. Several of the weaknesses that allowed two security researchers to hack into the vehicle remotely came down to the lack of a secure over-the-air (OTA) update solution. One of the first steps enabling the breach was an unpatched software vulnerability in the multimedia system, which ran a Linux-based operating system. Exploiting that vulnerability led the researchers to the V850 controller. The V850 controller software was built to only listen to the CAN bus and not write commands to it. However, it is still a computer system, and all the researchers had to do was reprogram it with a malicious firmware update, which they were able to deploy because of a lack of proper authenticity checks. This allowed them to write commands directly to the CAN bus and thereby control the engine, steering, brakes, and every other critical system.

A properly functioning OTA update mechanism requires authenticity checks, which provide another layer of security and a deterrent to malicious hackers. An OTA solution should also use a secure channel (e.g., HTTPS) to deploy patches. Kenna Security states that the probability of a vulnerability being exploited is less than 10 percent if it is patched within 5 to 10 days of discovery, but rises to more than 90 percent once 60 days have passed. Unfortunately, the average remediation time today is 110 days, which helps explain the increasing velocity of security breaches.
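As a minimal sketch of the kind of authenticity check such an OTA client can perform (not how any particular product, Mender included, implements it), the following C snippet verifies a detached Ed25519 signature over a downloaded image with libsodium before allowing the image to be installed; the file names and key handling are placeholders.

```c
/* Minimal sketch of an OTA authenticity check using libsodium (Ed25519).
 * Assumptions: the vendor's public key is provisioned into the device, and
 * the update server delivers image.bin plus a detached signature image.sig.
 * Error handling and secure key storage are simplified for brevity. */
#include <sodium.h>
#include <stdio.h>
#include <stdlib.h>

static unsigned char *read_file(const char *path, size_t *len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long sz = ftell(f);
    rewind(f);
    unsigned char *buf = malloc((size_t)sz);
    if (buf && fread(buf, 1, (size_t)sz, f) != (size_t)sz) { free(buf); buf = NULL; }
    fclose(f);
    if (buf) *len = (size_t)sz;
    return buf;
}

int main(void)
{
    /* Placeholder: in a real device this key is baked in at manufacture. */
    static const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES] = { 0 };

    if (sodium_init() < 0) return 1;

    size_t img_len = 0, sig_len = 0;
    unsigned char *img = read_file("image.bin", &img_len);
    unsigned char *sig = read_file("image.sig", &sig_len);
    if (!img || !sig || sig_len != crypto_sign_BYTES) return 1;

    if (crypto_sign_verify_detached(sig, img, img_len, vendor_pk) != 0) {
        fprintf(stderr, "Signature check failed: update rejected\n");
        return 1;  /* never flash an unauthenticated image */
    }
    printf("Signature OK: image may be installed\n");
    return 0;
}
```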

We created Mender as an open source project to address the update process so that device makers can patch vulnerabilities in a timely manner, deploy bug fixes, and enable new features for their customers. Our journey started three years ago, when we began conducting user tests with teams that already had substantial experience in the market. Many participants had dealt with OTA for years, some for more than a decade, back when the field was better known as cyber-physical systems (CPS) or machine-to-machine (M2M). We found most embedded teams had one or two people dedicated to building and maintaining a homegrown tool to manage the update process. And with time-to-market pressure, it’s not surprising they lacked the bandwidth to include all the security features needed for a secure and robust update process. We hope that will no longer be the case.

Ralph Nguyen is Head of Community Development at Mender. He brings a decade of experience providing solutions to organizations with complex challenges in software development and systems management. Nguyen has held various senior roles at leading developer-centric software companies such as Sonatype and Electric Cloud and holds a B.A. in Modern Literature from the University of California Santa Cruz.

via IOT Design

Flexibility to update firmware, a key to IoT devices

 

Internet of Things (IoT) devices are being introduced into the market at a rapid pace – from home appliances to medical devices to cars – as manufacturers strive to stay ahead of their competitors with new innovations and the flexibility to adopt or integrate new technologies. Designers must build flexibility into their products to keep up with the evolving IoT ecosystem as new functionalities and regulations are adopted. Firmware updates not only allow customization during initial deployment at a customer site, but also enable new functions and features to be added after a product is in the field and allow firmware issues to be fixed during use.

Non-volatile memory (NVM) devices such as NOR flash are commonly used as a firmware code storage medium due to their reprogrammability and reliability. By rewriting a portion of the firmware code residing in the NVM, manufacturers can easily update device capabilities.

When looking to update firmware there are three things to consider:

  • What/how much code to update
  • How often to update
  • The time it will take (speed) to perform the update

What/how much firmware code to update

What and how much firmware code to update must be considered during the initial design phase of the IoT device. The updatable portion of the firmware must be stored in a separate area of the NOR flash device from the non-updatable portion.

Updating any piece of NOR flash starts with first erasing that portion of the memory and then programming new information into it. NOR flash is organized in portions of different sizes called sectors and blocks. NOR flash devices such as SST’s 64 Mb SuperFlash SST26VF064B are organized in uniform 4 KB sectors (4 KB = 4 × 1024 × 8 bits = 32,768 bits) that can be individually erased and reprogrammed. The memory is also organized in larger 8 KB, 32 KB, and 64 KB blocks that can likewise be individually erased. Thus, one 8 KB block has two sectors, one 32 KB block has eight sectors, and one 64 KB block has 16 sectors. Figure 1 shows the memory organization of the SST26VF064B in 8/32/64 KB blocks, each of which can be individually protected.


[Figure 1 | Memory organization (map) of the SST26VF064B, which consists of eight 8 KB blocks, two 32 KB blocks, and 126 64 KB blocks. Click to zoom.]

Prior to performing any update to any portion of the flash, blocks in that portion must be unprotected to allow for erasing and programming. After completing an update it is prudent to again protect those blocks to prevent any inadvertent writing or erasing of those areas.
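A sketch of that unprotect, erase, program, re-protect sequence is shown below in C. The SPI helper functions (spi_write, wait_while_busy) and the block-protection routines are hypothetical, and while the WREN, sector-erase, and page-program opcodes follow common SPI NOR conventions, all commands and timings should be verified against the SST26VF064B datasheet before use.

```c
/* Sketch of unprotect -> erase -> program -> re-protect for one flash region.
 * Assumes addr is sector-aligned and the region does not share sectors with
 * data that must be preserved. Board-support functions are hypothetical. */
#include <stdint.h>
#include <stddef.h>

#define CMD_WREN          0x06  /* write enable; must precede each erase/program */
#define CMD_SECTOR_ERASE  0x20  /* erase one 4 KB sector */
#define CMD_PAGE_PROGRAM  0x02  /* program up to one 256-byte page */
#define SECTOR_SIZE       4096u
#define PAGE_SIZE         256u

/* Hypothetical board-support layer, not a real driver API. */
extern void spi_write(const uint8_t *hdr, size_t hdr_len,
                      const uint8_t *data, size_t data_len);
extern void wait_while_busy(void);                        /* poll WIP status bit */
extern void unprotect_blocks(uint32_t addr, size_t len);  /* part-specific BPR write */
extern void protect_blocks(uint32_t addr, size_t len);    /* part-specific BPR write */

static void write_enable(void)
{
    const uint8_t cmd = CMD_WREN;
    spi_write(&cmd, 1, NULL, 0);
}

static void sector_erase(uint32_t addr)
{
    uint8_t hdr[4] = { CMD_SECTOR_ERASE,
                       (uint8_t)(addr >> 16), (uint8_t)(addr >> 8), (uint8_t)addr };
    write_enable();
    spi_write(hdr, sizeof hdr, NULL, 0);
    wait_while_busy();
}

static void page_program(uint32_t addr, const uint8_t *data, size_t len)
{
    uint8_t hdr[4] = { CMD_PAGE_PROGRAM,
                       (uint8_t)(addr >> 16), (uint8_t)(addr >> 8), (uint8_t)addr };
    write_enable();
    spi_write(hdr, sizeof hdr, data, len);   /* len must not exceed PAGE_SIZE */
    wait_while_busy();
}

void flash_update_region(uint32_t addr, const uint8_t *data, size_t len)
{
    unprotect_blocks(addr, len);                          /* 1. lift write protection */

    for (uint32_t a = addr; a < addr + len; a += SECTOR_SIZE)
        sector_erase(a);                                  /* 2. erase affected sectors */

    for (size_t off = 0; off < len; off += PAGE_SIZE) {   /* 3. program page by page */
        size_t chunk = (len - off < PAGE_SIZE) ? len - off : PAGE_SIZE;
        page_program(addr + (uint32_t)off, data + off, chunk);
    }

    protect_blocks(addr, len);                            /* 4. restore protection */
}
```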

The updatable portion of the firmware must be organized in sectors and blocks in such a manner that there is enough flexibility to allow both limited and maximum feature/function updates. Since the speed of an update is determined by the number of sectors and blocks that need to be erased and re-programmed, it is better to think of speed and flexibility together when organizing the updatable portion of the firmware. Figure 2 shows an example of organizing the memory into updatable and non-updatable portions. Non-updatable portions (such as boot code) are stored in protected regions, while updatable portions of firmware (such as features/functions) are divided into smaller or larger blocks based on flexibility requirements. Updatable image files are stored in larger blocks and updatable variables/parameters are stored in smaller blocks.

[Figure 2 | Organizing memory in non-updatable portions (such as boot code) and updatable portions (such as code for functions/features, image files, and parameter variables). Click to zoom.]
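As one possible way to capture such a layout in firmware, the header below defines example regions along the lines of Figure 2; the base addresses and sizes are illustrative choices for an 8 MB (64 Mb) device, not values taken from the article or the datasheet.

```c
/* Illustrative partitioning of the 8 MB (64 Mb) address space: non-updatable
 * boot code kept protected, updatable code and assets in larger blocks, and
 * parameters/variables in small 4 KB sectors. Example values only. */
#define REGION_BOOT_BASE    0x000000u  /* non-updatable boot code, always protected */
#define REGION_BOOT_SIZE    0x010000u  /* 64 KB */

#define REGION_APP_BASE     0x010000u  /* updatable firmware features/functions */
#define REGION_APP_SIZE     0x400000u  /* 4 MB in 64 KB blocks */

#define REGION_ASSETS_BASE  0x410000u  /* updatable image files, larger blocks */
#define REGION_ASSETS_SIZE  0x3E0000u

#define REGION_PARAMS_BASE  0x7F0000u  /* parameters/variables in 4 KB sectors */
#define REGION_PARAMS_SIZE  0x010000u  /* 64 KB = sixteen 4 KB sectors */
```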

How often to update

The main limitation on how often you may want to update firmware is the endurance of the memory used in the application. SuperFlash technology memory such as the SST26VF064B is rated for 100,000 endurance cycles, which means that each sector can be programmed and erased 100,000 times. The possibility of updating firmware 100,000 times sounds like plenty; however, many IoT devices also collect data and store it in NOR flash during operation, so this must be considered when calculating the maximum endurance cycle limitation.

It is also important to allocate sufficient sectors in the memory to account for endurance. For example:

Suppose the IoT device is collecting and storing 16 bytes of information and the information is expected to be collected and stored 100 million times during the life of the product. The number of sectors that should be allocated can be calculated as follows:

1 sector = 4 KB

Assume all the address locations in the sector are used to store information, 16 bytes of data at a time, and are written to a new address location until the end of the sector is reached (e.g., 0x0000-0x000F, then 0x0010-0x001F, then 0x0020-0x002F, etc.).

Since 4 KB / 16 bytes = 256, a sector can be written 256 times before it is full and must be erased. With an endurance limit of 100,000 erase cycles per sector, and 256 writes per cycle, data can be collected and stored 256 × 100,000 = 25,600,000 times in one sector.

If an application requires data to be collected and stored 100 million times, the number of sectors to allocate is calculated as 100,000,000 / 25,600,000 = 3.9. Therefore, in this example, it is necessary to allocate 4 sectors (rounding up) to store the 16-byte records for the life of the application.
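The same calculation can be expressed in a few lines of C, which makes it easy to re-run for different record sizes or lifetime write counts; the numbers below simply reproduce the worked example.

```c
/* Sector-allocation calculation from the example above: 16-byte records,
 * 4 KB sectors, 100,000 erase cycles per sector, 100 million lifetime writes. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t sector_bytes  = 4u * 1024u;   /* one erasable sector */
    const uint64_t record_bytes  = 16u;
    const uint64_t endurance     = 100000u;      /* erase cycles per sector */
    const uint64_t total_records = 100000000u;   /* lifetime writes */

    uint64_t records_per_fill   = sector_bytes / record_bytes;      /* 256 */
    uint64_t records_per_sector = records_per_fill * endurance;     /* 25,600,000 */
    uint64_t sectors_needed     = (total_records + records_per_sector - 1)
                                  / records_per_sector;             /* ceil -> 4 */

    printf("Allocate %llu sectors for data logging\n",
           (unsigned long long)sectors_needed);
    return 0;
}
```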

IoT device engineers need to do similar calculations to allocate sufficient sectors and blocks for data logging parameters so as not to breach the endurance limit of their NOR flash device.

Speed of updates

The speed of an update can be calculated from the number of blocks and sectors that need to be erased and reprogrammed. Suppose it is necessary to reprogram 1 Mb, 2 Mb, or 4 Mb of firmware code/data stored in several 64 KB blocks of an SST26VF064B. The code/data can comprise firmware code, image files, or other code that needs to be updated. The update involves issuing a sequence of command instructions to the flash: unprotecting the blocks of memory, erasing those blocks, programming them with the updated data/code, and re-protecting them.

For the SST26VF064B, the required sequence of instructions for updating 1 Mb, 2 Mb, or 4 Mb of memory is shown in Table 1. From Table 1 it is evident that the two most significant periods are erase time and program time.


[Table 1 | Sequence of flash command instructions to update 1 Mb, 2 Mb, or 4 Mb of memory. Click to zoom.]

The SST26VF064B uses SuperFlash technology, which provides excellent erase performance. A comparison of erase and program performance for SuperFlash technology versus conventional flash is shown in Table 2. The vastly superior erase performance provided by SuperFlash technology as compared to conventional flash is immensely useful for reducing update time. The SST26VF064B supports a maximum clock frequency of 104 MHz, maximum sector erase time of 25 ms, maximum block erase time of 25 ms, and maximum page program time of 1.5 ms. A 12 ns delay (CE high time) is also required between each command instruction to flash memory operating at a clock frequency of 104 MHz.


[Table 2 | Program and erase times for the SST26VF064B and conventional flash. Click to zoom.]

Using the sequence of commands shown in Table 1, along with the program and erase times, the time required to update 1 Mb, 2 Mb, or 4 Mb of SuperFlash technology memory and of conventional flash memory can be calculated; the results are shown in Tables 3 and 4, respectively. IoT device engineers must perform such calculations to estimate update speed, with the aim of minimizing device downtime during updates.
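As a back-of-the-envelope illustration of this kind of estimate (the exact figures are in Tables 3 and 4), the sketch below combines the worst-case erase and program times quoted above, assuming 64 KB erase blocks and 256-byte program pages and ignoring SPI transfer time and the 12 ns CE-high gaps, which are comparatively small.

```c
/* Rough update-time estimate for the SST26VF064B using the worst-case
 * timings quoted above: 25 ms per 64 KB block erase, 1.5 ms per 256-byte
 * page program. 1 Mb of code = 128 KB. SPI transfer time is ignored, so
 * real figures will be somewhat higher. */
#include <stdio.h>

int main(void)
{
    const double   block_erase_ms  = 25.0;
    const double   page_program_ms = 1.5;
    const unsigned block_bytes     = 64u * 1024u;
    const unsigned page_bytes      = 256u;
    const unsigned mbits[]         = { 1u, 2u, 4u };

    for (unsigned i = 0; i < 3; i++) {
        unsigned bytes  = mbits[i] * 128u * 1024u;   /* 1 Mb = 128 KB */
        unsigned blocks = bytes / block_bytes;
        unsigned pages  = bytes / page_bytes;
        double total_ms = blocks * block_erase_ms + pages * page_program_ms;
        printf("%u Mb: %u block erases + %u page programs ~ %.0f ms\n",
               mbits[i], blocks, pages, total_ms);
    }
    return 0;
}
```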


[Table 3 | Amount of time required to update 1 Mb, 2 Mb, or 4 Mb of SuperFlash technology memory. Click to zoom.]


[Table 4 | Amount of time required to update 1 Mb, 2 Mb, or 4 Mb of conventional flash memory. Click to zoom.]

Conclusion

IoT device design engineers need to provide the flexibility to update application code and data. What and how much code to update, how often to update, and the speed of updates are issues that need to be addressed while designing an IoT device. The selection of NVM impacts these issues and plays a critical role in calculating the timing and speed of code updates.

Hardik Patel is a Principal Applications Engineer in the memory division of Microchip Technology, Inc. He previously worked at Silicon Storage Technology and Micrel as a Senior Applications Engineer. He holds a Bachelor’s of Applied Science from University of Toronto in Electrical Engineering.

Microchip Technology, Inc.

@MicrochipTech

LinkedIn: http://ift.tt/2pdEkFa

Facebook: http://ift.tt/28ZSC1x

Google+: http://ift.tt/2oeD7ZO

YouTube: http://www.youtube.com/user/MicrochipTechnology

 

via IOT Design

How to make IoT profitable and secure: Building blocks inside of sandboxes

 

From a business perspective, much of Internet of Things (IoT) development isn’t new. Wireless and even mobile Internet connections are mature technologies, and embedded processors go back further still. For industrial markets, the “Industrial IoT” has replaced “SCADA” (supervisory control and data acquisition) as the control system architecture using computers, networked data communications, and process supervisory management to control peripheral devices (the ‘things’ of IoT) in large plants. So what’s new?

What’s new is the combination, and the novel challenges that come with it. Namely, supporting all the services and business models on which companies have come to rely, while running on connected systems that are stiflingly designed-to-cost and/or low-power.

It’s challenging, time-consuming, and costly enough to make these connected devices in the first place. But that’s only half the story. Like all software products, smart devices need post-launch support and features if they are to stay competitive. Furthermore, any business model that goes beyond the initial product sale — selling apps, streaming content, pay per use, etc. — will require regular updates. And what if that business model changes?

Unlike the rest of the digital world (PCs, smartphones, servers, etc.), most low-power embedded systems don’t run an application platform. This means the product’s software isn’t cleanly architected with clear boundaries and interfaces (APIs or ABIs) to separate concerns. Adding or fixing a feature means updating the whole product, with all the extra (exponential) debugging and validation that entails.

It’s not only irritating to the marketing team, which never gets its wishes on time or on budget, but also annoying to the end user, who needs to download large firmware updates and restart the device for them to take effect (often resetting their configuration in the process). To top it all off, embedded programmers aren’t cheap, owing to the difficulty and wide range of expertise needed for the job.

But a slight change in scenery can fix all these problems at once. Specifically, the binary building blocks of IoT should be developed and deployed in individual sandbox environments, all on top of a virtualization layer running semantically managed code. With minimal overhead, embedded binary building blocks (often called apps) sandboxed this way can be placed together to build the device much like one would build a Lego set — a comparison that remains apt post-launch as well, as the device can be later modified as quickly and freely as a mobile app, giving the business that made it more flexibility and fewer expenses.

It’s simple to deploy to a sandbox

This technique is a far cry from how sandboxing is used in DevOps, where it is most common, and the use cases and rationales differ as well, but the purpose is much the same. As mentioned, it’s the lack of isolation (CPU, memory, peripherals) in embedded systems that makes them so complicated to update. By removing dependencies on both the whole program and its specific toolchain, this combination of virtualization and sandboxing lets developers break a program into smaller binary building blocks and work only on the blocks relevant to the new feature or update.

Consider the ease of updating an app for mobile platforms, where this isolation is already built into the environment. If, say, the creator of Angry Birds wants to add a new bird to the game, she only has to touch the code related to the app itself – no need to access the Android kernel (or even understand it for that matter). To deploy it, she only has to pass a quick round of validation and push the app to the shelf of the targeted ecosystem (here, an application store on the cloud). From there, devices can be updated seamlessly, probably as a background process.

Nothing about updating, upgrading, or fixing embedded firmware is that easy today, but that is the experience developers can have when each binary building block is sandboxed. Like their namesake, everything becomes simple and incremental: updates, tuning, debugging, add-ons, etc.

Simplicity for developers and end users isn’t the only benefit of sandboxing in this way. Because you’re able to do partial software updates on embedded systems, it also saves bandwidth at deployment time. In addition, sandboxing lets you set strong permissions for a given binary building block, specifying when blocks can and cannot access other blocks, as well as their access to resources like CPU, RAM, or hardware peripherals.
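To make the idea concrete, the sketch below shows one hypothetical shape such a per-block permission descriptor could take; the structure, field names, and values are illustrative only and are not taken from any particular sandboxing product.

```c
/* Purely illustrative per-block permission descriptor; all names and fields
 * are hypothetical and serve only to show the kind of policy a sandbox
 * layer can enforce for each binary building block. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    const char  *block_name;       /* the sandboxed binary building block */
    uint32_t     max_ram_bytes;    /* RAM budget enforced by the sandbox */
    uint8_t      max_cpu_percent;  /* CPU share the scheduler will allow */
    bool         can_use_network;  /* access to the connectivity stack */
    bool         can_use_gpio;     /* access to hardware peripherals */
    const char **callable_blocks;  /* blocks this one may invoke, NULL-terminated */
} block_permissions_t;

static const char *ui_deps[] = { "sensor_service", NULL };

/* Example: a UI block may call the sensor service but gets no network access. */
static const block_permissions_t ui_block = {
    .block_name      = "ui_app",
    .max_ram_bytes   = 16 * 1024,
    .max_cpu_percent = 20,
    .can_use_network = false,
    .can_use_gpio    = false,
    .callable_blocks = ui_deps,
};
```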

How to build sandboxes for software building blocks

The virtualization layer beneath the sandboxes is key. It not only enables sandboxing on small processors that don’t provide memory management units (MMUs), it is already a step toward the goal of reducing post-launch expenses. A virtualized Java environment, for example, can take advantage of more modern coding practices for simplicity, reusability, and abstraction, and allows a development team to be composed of programmers who are in far greater supply. It also adds to the security of the device, for many of the same reasons sandboxing does.

A natural pushback might be that a virtualized environment and a sandbox layer would add costly overhead, especially for very small systems like wearables or battery-operated sensors. However, software on the market today can provide these environments with a flash footprint of mere kilobytes and very little extra RAM. A typical sandboxing layer built on top of a virtualization layer weighs, all together, less than 45 KB.

So where do the dollars come in?

Three places. First, as soon as you deal with sandboxing and virtualization, you enter an economical industrial process in which you reuse the binary building blocks you have created, now available on your company’s “shelf.” This is in fact one of the biggest assets a company can have. And because virtualization always comes with simulation – the binary building blocks can run on a virtual device as if they were running on a real device – it greatly lowers a project’s financial risk, since it’s easier to validate a specification upfront. The gains are well known: more market share, more margin, and more speed.

Second, the combination of virtualization and sandboxing cuts down on money spent keeping the connected device up-to-date and competitive. As stated earlier, when updating or fixing your device is as simple as updating an app, you bypass the time-intensive process of modifying, testing, and validating the entire firmware. The gains apply to R&D and maintenance costs.

Third, it opens entirely new revenue streams post-launch. Being updatable, a device can join an ecosystem where “pay per use” of new services is the rule. Your marketing teams can easily test new ideas by adapting the product to customers’ usage while charging a monthly fee; you could sell subscription services giving access to web-based services like monitoring, alarms, or streamed/downloadable content; or you could tie the device into a larger business model – one of our own partners, a subsidiary of a utilities company, used this approach for a device that helps owners manage their gas bill. The gains are new revenue streams.

Best of all, you can change your mind. When designing or updating a smart device is as big a commitment as it is now, a new competitor or a sudden change in consumer demand means cutting deep into your margins to keep up. Post-launch flexibility via sandboxing and virtualization allows you to be as agile as software products running on much larger devices.

Profitability is the only barrier left on IoT’s path to ubiquity. If I had to guess, I’d say that 80 percent of smart device companies around right now will fail. Not from poor technical knowledge, but by bleeding to death on tiny margins. The current market will only pay so much for a smart device, which is only barely above the (current) average cost of development.

Forcing the market to bear higher prices would take either a monopoly or a miracle. But as we saw, every other path to profitability just takes software. The sandbox/virtualization combo is all about being fast; low cost (by reusing binary building blocks); low risk; agile to the market’s evolution; and optionally allowing new, recurring revenues. It’s a simple software solution for a complex economic problem with no real downsides. The sandbox/virtualization combo is the final “block” needed to turn IoT devices into profitable businesses.

Fred Rivard is CEO of MicroEJ.

MicroEJ

www.microej.com

@microej

LinkedIn: http://ift.tt/2oethap

Google+: http://ift.tt/2oet0nW

YouTube: http://www.youtube.com/user/IS2Tsa

 

via IOT Design
