AWS releases Amazon Kinesis Analytics

Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), today announced the availability of Amazon Kinesis Analytics, a fully managed service for continuously querying streaming data using standard SQL. Using Kinesis Analytics, developers can write standard SQL queries on streaming data and gain actionable insights in real-time, without having to learn any new programming skills. To get started with Kinesis Analytics, visit http://ift.tt/2aPdbDn.

Today’s digital businesses generate massive quantities of streaming data from diverse sources such as website and mobile app click-streams, sensors embedded in connected devices, and IT system log files. Being able to continuously query and gain insights from this information in real-time – as it arrives – can allow companies to respond more quickly to business and customer needs. However, existing data processing and analytics solutions aren’t able to continuously process this “fast moving” data, so customers have had to develop streaming data processing applications – which can take months to build and fine-tune – and invest in infrastructure to handle high-speed, high-volume data streams that might include tens of millions of events per hour. Now, with Kinesis Analytics, continuously querying streaming data in real-time is as simple as writing SQL queries. Kinesis Analytics integrates with Kinesis Streams and Kinesis Firehose and can automatically recognize standard data formats within data streams and suggest a schema, which is easy to edit using Kinesis Analytics’ interactive schema editor. Kinesis Analytics automatically provisions, deploys, and scales the resources required to continuously run queries, delivering processed results directly to AWS services, including Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon Elasticsearch Service.

“AWS’s functionality across big data stores, data warehousing, distributed analytics, real-time streaming, machine learning, and business intelligence allows our customers to readily extract and deploy insights from the significant amount of data they’re storing in AWS,” said Roger Barga, General Manager, Amazon Kinesis. “With the addition of Amazon Kinesis Analytics, we’ve expanded what’s already the broadest portfolio of analytics services available and made it easy to use SQL to do analytics on real-time streaming data so that customers can deliver actionable insights to their business faster than ever before.”

Customers can get started with Amazon Kinesis Analytics in minutes by going to the AWS Management Console and selecting a Kinesis Streams or Kinesis Firehose data stream. Kinesis Analytics ingests the data, automatically recognizes standard data formats, and suggests a schema that can be refined using the interactive schema editor. Next, customers use the Kinesis Analytics SQL editor and built-in templates to write SQL queries, and point to where they want Kinesis Analytics to load the processed results. Kinesis Analytics takes care of everything required to continuously query streaming data, automatically scaling to match the volume and throughput rate of incoming data while delivering sub-second processing latencies.
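The continuous queries described above are written in SQL inside the Kinesis Analytics console, but the underlying idea is easy to sketch. Below is a minimal pure-Python illustration (not Kinesis Analytics code) of a tumbling-window aggregation – the kind of computation a streaming SQL query performs over clickstream events. The event shape and window size are hypothetical.

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """Group (timestamp, page) events into fixed time windows and count hits per page.

    This mimics a streaming SQL query of the form:
        SELECT page, COUNT(*) ... GROUP BY page, tumbling 10-second window
    """
    windows = {}
    for ts, page in events:
        # Each event falls into the window beginning at the nearest lower multiple
        # of the window length.
        window_start = (ts // window_seconds) * window_seconds
        windows.setdefault(window_start, Counter())[page] += 1
    return windows

# Hypothetical clickstream events as (timestamp_seconds, page) pairs:
events = [(1, "/home"), (3, "/cart"), (8, "/home"), (12, "/home")]
result = tumbling_window_counts(events)
# Window [0, 10) saw /home twice and /cart once; window [10, 20) saw /home once.
```

In the managed service, this windowing, scaling, and delivery to S3, Redshift, or Elasticsearch happens automatically; the sketch only shows the shape of the computation.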

MLB Advanced Media (MLBAM) is a full service solutions provider that delivers world-class digital experiences through all forms of interactive media. “We capture telemetry and clickstream data from our video streaming clients using Amazon Kinesis Streams. We process that data in real-time to monitor the video streaming experience we provide our customers,” says Rob Goretsky, Director of Data Engineering, MLBAM. “Insightful analysis of streaming data previously involved building dedicated data pipelines to derive specific metrics. We can now interactively develop queries in minutes using Kinesis Analytics. The familiar SQL API allows our data engineers fast and easy access to the data, and the interactive console provides immediate feedback. We think Kinesis Analytics has promise to help further drive down the cost and realization of these data within our organization, helping us to provide a better customer experience.”

JustGiving is one of the world’s largest social platforms for giving that’s helped 27.7 million users in 196 countries raise $4.1 billion for over 27,000 causes. “We capture clickstream data from web and mobile clients using Amazon Kinesis Streams,” said Richard Freeman, Ph.D., Lead Data Engineer, JustGiving. “Building solutions to process that data in real-time with custom software takes weeks or months to set up. With Kinesis Analytics, we can interactively build and deploy streaming analytics without these development costs or ongoing operational burden. Fully managed services like Kinesis Analytics will allow our engineers and developers to focus on improving the experience of charities and givers.”

Customers can launch Kinesis Analytics using the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. With Kinesis Analytics, customers pay only for the resources that their stream processing applications actually use. There is no minimum fee or setup cost. Amazon Kinesis Analytics includes technology components licensed from SQLstream. Kinesis Analytics is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions and will expand to additional Regions in the coming months.

About Amazon Web Services

For 10 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 70 fully featured services for compute, storage, databases, analytics, mobile, Internet of Things (IoT) and enterprise applications from 35 Availability Zones (AZs) across 13 geographic regions in the U.S., Australia, Brazil, China, Germany, Ireland, Japan, Korea, Singapore, and India. AWS services are trusted by more than a million active customers around the world – including the fastest growing startups, largest enterprises, and leading government agencies – to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about.


via IOT Design

Locus Technologies creates IoT interoperability with Locus Platform



MOUNTAIN VIEW, CA — Locus Technologies (Locus), the industry leader in cloud-based EHS software, announced today that its multi-tenant SaaS Platform fully interoperates with the Internet of Things (IoT). The company has been the pioneering innovator in the EHS software space since 1999 when it first introduced its Automation and Data Management Groups, which used Internet-based technologies to manage and control vast amounts of data generated at the company’s customer sites.

Locus’ automation technologies have evolved over the years to encompass the vast array of Internet-connected devices, sensors, programmable logic controllers, and other instruments to gather and organize large amounts of streaming data.

The IoT interconnects uniquely identifiable embedded computing, testing, and monitoring devices within the existing Internet infrastructure and software platform. Locus IoT services offer connectivity beyond machine-to-machine communications and cover a variety of protocols, domains, and applications.

“The IoT is one of the fastest-growing trends in tech. When applied to the environmental monitoring industry, there is an overwhelming influx of information that has to be dealt with; many companies are concerned that the sheer volume of data will render the information useless. For that reason, Locus invested in smart software and intelligent databases to deal with this new trend, long before IoT had a common name. We aspire to change the face of the environmental monitoring industry,” said Neno Duplan, CEO of Locus.

In any industry, when all incoming data are connected and centrally accessible through a multi-tenant SaaS application, the flow of information is much more efficient and effective. For example, instead of having a separate data collection protocol from software applications for water quality management, waste management, GHG management, EHS compliance and incident management, a company can have all emissions-related records — regardless of whether they originated in the laboratory, field, or Internet-connected monitoring device — in a single system of record. From this single system of record, they can manage compliance activities, perform data gathering and monitoring, manage water treatment systems remotely, and manage resources for sustainability reporting at the same time. Adopting such a structure offers Locus’ customers the ability to converge all incoming sources of information to create a much-needed integrated enterprise platform for EH&S+S management.

At the crux of this integration is Locus’ highly scalable and end-user configurable Locus Platform. The interoperability combines the Locus platform as a service with its automation, mobile, and IoT platforms. The combined IoT suite will be hosted on Locus’ cloud.

“By combining our cloud platform and Internet of Things (IoT) platforms to make them interoperable, we provide the single platform for our customers that helps them lower their operational costs, reduce cycle time, and ultimately become better stewards of the environment. This integration will give our customers more analytics from connected devices,” added Duplan.

About Locus Technologies

Locus Technologies is a leading environmental and sustainability software company that has been helping companies achieve environmental and compliance business excellence since 1997. Public and private companies, such as Chevron, Honeywell, Sempra, Monsanto, DuPont, San Jose Water Company and Stanford Linear Accelerator Center, rely on Locus to manage their EHS compliance, water quality, air emissions, greenhouse gasses, discharges, as well as remediation efforts and environmental impacts. Locus provides mobile and cloud-based multi-tenant Platform-as-a-Service (PaaS) software solutions to address the EHS and Sustainability industry’s most pressing information management challenges. For more information, visit locustec.com or email info@locustec.com.

via IOT Design

Sub-threshold circuitry: Making Moore’s about power, not performance

As silicon geometries approach the edge of physics, a new rule of thumb is poised to govern the computing industry: “Thou shalt reduce power consumption by 50 percent every two years.” How could that be possible? Sub-threshold voltage circuitry.

The ULPBench is a standardized benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC) for measuring the energy efficiency of ultra-low power (ULP) embedded microcontrollers (MCUs). The benchmark ports a normalized set of MCU workloads – such as memory and math operations, sorting, and GPIO interaction – to a target device. These workloads form the basis for analyzing the active and low-power conditions of 8-, 16-, or 32-bit MCUs, including active current, sleep current, core efficiency, cache efficiency, and wake-up time. The results are then calculated using a reciprocal formula – 1000 divided by the median of five runs of the average energy per second over 10 ULPBench cycles – so that lower energy consumed during workload operation yields a higher number: the ULPBench score.
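Assuming the reciprocal formula reads as “1000 divided by the median of five runs of average energy per ULPBench second,” the scoring arithmetic can be sketched as follows. The run energies below are made up for illustration, not measured values.

```python
from statistics import median

def ulpbench_score(avg_energy_uj_per_run):
    """ULPBench-style score: 1000 / median energy (microjoules) of five runs,
    where each run averages the energy of 10 ULPBench workload cycles.
    Higher score = less energy consumed."""
    assert len(avg_energy_uj_per_run) == 5, "the benchmark takes the median of five runs"
    return 1000 / median(avg_energy_uj_per_run)

# Illustrative (not measured) per-run average energies, in microjoules:
runs = [2.65, 2.60, 2.70, 2.66, 2.64]
score = ulpbench_score(runs)  # median is 2.65 uJ, so the score is ~377.4
```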

In November 2015, Ambiq Micro (www.ambiqmicro.com), a semiconductor vendor out of Austin, TX, submitted its Apollo MCU for testing against the ULPBench, scoring 377.50 (the reciprocal formula means the higher the benchmark score, the better) – more than twice the score of the previous bellwether, STMicroelectronics’ STM32L476RG. Depending on an application’s priorities, this 2x power savings can be repurposed to extend battery life or add new features (Sidebar 1). According to Scott Hanson, Ph.D., Founder and CTO of Ambiq Micro, advances in energy efficiency such as those being realized today could lead to a new iteration of Moore’s law in which the power consumption of embedded microprocessors is cut in half every couple of years.

“What we’re seeing is every one of our customers wants to one-up their product from last year,” Hanson says. “If they want to add some great new feature – maybe last year it was heart rate monitoring and this year a microphone to do some always-on voice processing – that all takes CPU cycles. Today a lot of these companies are only running effectively at 1 million instructions per second (MIPS) or so, so very few cycles per second. Maybe they want to make a leap to 10 MIPS and, suddenly, MCU power goes from being a 25 percent contributor to a 75 percent contributor. That’s a problem.

“Much in the same way that Moore’s law was about adding more transistors in the same area, we have to be very focused on driving energy down 2x or 4x every single year,” he adds.

Sidebar 1 | Wearable energy efficiency beyond the battery

It’s true: charging a Fitbit couldn’t be simpler. Still, for a wearable device, especially one that requires 24-hour wear to take full advantage of its capabilities, charging even once a week feels like a chore. When the battery drains, it’s like your carriage turned back into a pumpkin. What was once the leading authority on your personal fitness level is now just an ugly, slightly uncomfortable bracelet. It’s so disenchanting that every time you put it on the charger, it might not make it back onto your wrist.

This is exactly the problem that Misfit wanted to remedy. Their goal? Produce a device that never needs to be removed. The result? Their original tracker, the Misfit Shine, boasted a six-month battery life; most activity trackers need to be charged at least once a week.

But while the Shine outpaced most other trackers in battery life, many users found the functionality to be too limited. The Shine was equipped with a Silicon Labs EFM32 MCU, Bluetooth Low Energy (BLE), and a 3-axis accelerometer. That put the Shine about on par with Fitbit’s most basic offering, the Fitbit Zip, which, while not intended to track sleep, offers similar battery life and a more useful display. The next-generation Shine would need to add functionality without backpedaling on their commitment to long-term wear.

Enter Ambiq Micro’s Apollo MCU. The Apollo in the Misfit Shine 2, twice as powerful as the EFM32 MCU in the original Shine, allowed for the addition of a vibration motor for call and text notifications; multicolored LEDs and a capacitive touch sensor for a clearer, more interactive user interface; and a magnetometer to improve the accuracy of activity tracking. Thanks to Ambiq’s SPOT platform, the Apollo also boasts industry-leading energy efficiency, allowing the Shine 2 to retain its predecessor’s six-month battery life despite the added functionality’s higher power demands.

[Figure 1 | The Misfit Shine 2 has a six-month battery life. Similar activity trackers, like Fitbit’s newest and most advanced “everyday” offering, the Fitbit Alta, require a charge about every five days.]

But while the Apollo offers unparalleled power consumption compared to similar MCUs, the processor isn’t the only place where battery life can be extended. Other components, such as sensors and wireless chips, could also leverage sub-threshold circuitry such as that used on the Apollo MCU to reduce power, and software can be optimized to further increase energy efficiency.

The way Ambiq’s Chief Technology Officer Scott Hanson sees it, “We’re constantly going to be talking about how we need to be lower energy and the batteries need to be better. Every component needs to be more efficient than it is today. We’re always going to be under that pressure.”

Sub-threshold voltage circuitry demystified

What enables the Apollo MCU to achieve such notable ULP performance metrics is the use of sub-threshold circuitry, which operates transistors at supply voltages below their threshold voltage rather than at the 1.8V or 3.3V supplies of typical MCUs. Threshold voltage represents the minimum gate-to-source voltage required to change a transistor’s state from “off” to “on” or drive a signal “low” or “high” for logic purposes. In a standard 1.8V integrated circuit (IC), significant current can be required to perform these state changes, which directly correlates with power consumption, as dynamic energy – the energy associated with turning transistors on or off – scales with the square of the operating voltage (Figures 1 and 2).

[Figure 1 | A typical 1.8V IC requires a significant amount of current to achieve a state change.]

[Figure 2 | Dynamic power consumption, or the energy required to switch a transistor on or off, is responsible for the majority of the energy used by ICs, particularly at higher operating voltages.]
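As a quick worked example of that quadratic relationship (dynamic switching energy E = C·V², where C is the switched capacitance), dropping the supply from a conventional 1.8V to a 0.5V sub-threshold level cuts switching energy by roughly 13x, independent of the capacitance value:

```python
def dynamic_energy(c_farads, v_volts):
    """Energy to fully charge/discharge a switched capacitance C at supply V: E = C * V^2."""
    return c_farads * v_volts ** 2

# The capacitance cancels in the ratio, so any value works; 1 pF is illustrative.
savings = dynamic_energy(1e-12, 1.8) / dynamic_energy(1e-12, 0.5)
# (1.8 / 0.5)^2 = 12.96 -- roughly a 13x reduction from voltage scaling alone.
```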

Ambiq, however, uses their Sub-threshold Power Optimized Technology, or SPOT, to operate transistors at voltages of less than 0.5V (sub-threshold), which provides a couple of benefits (Figure 3). First, state switching at these lower operating voltages makes for lower dynamic energy consumption. Second, the leakage current (see Figure 2) of “off” transistors can be harnessed to perform most computations, in essence recapturing previously lost power. In the case of Ambiq’s 32-bit ARM Cortex-M4F-based Apollo MCUs running at up to 24 MHz, the result is a platform that consumes 34 µA/MHz executing from flash and achieves sleep currents as low as 140 nA, both figures lower than competitive Cortex-M0+ offerings, the company says.

[Figure 3 | Ambiq’s SPOT platform operates transistors at sub-threshold voltages of less than 0.5V to achieve significant energy savings compared to standard IC implementations.]

“What we effectively do is we take a normal microprocessor, including both the analog elements and the digital elements, and run them at much lower voltage,” Hanson explains. “On the digital side we dial down the voltage very low, anywhere between 200 mV and 600 mV, depending on the type of device you’re using. That requires a system-wide change in how you design the chip, from the standard cells to how you do simulations to how you do timing closure to how you do voltage regulation – all of that has to be modified specifically to run at lower voltage. And then on the analog side we run at extremely low gate-to-source voltages, so we’ll use tail currents in amplifiers that are as low as a few picoamps.”

Sub-threshold circuitry is not without its challenges, however. While it can deliver exponential gains in energy efficiency, such low-voltage operation precludes processor speeds above a couple hundred MHz (for now) and also makes for circuits that are inherently more sensitive to fluctuations in temperature and voltage (Figure 4).

“This obviously comes with its share of problems,” Hanson says. “We’re very sensitive to temperature fluctuations and voltage fluctuations and process fluctuations, but we have a pretty wide range of techniques that we use to address that, for instance with proprietary analog circuit building blocks. Every analog circuit that you can read about in a textbook is based on saturated transistors and bipolar transistors, not sub-threshold-based MOSFETS, so we have had to reinvent a lot of the underlying analog circuit building blocks such that they work at extremely low sub-threshold currents.

[Figure 4 | Besides exponential current fluctuations in response to changes in operating voltages at sub-threshold levels, slight temperature variations can lead to radical current deltas as well. This mandates significant compensation in sub-threshold circuitry.]
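The exponential sensitivity Figure 4 describes can be sketched with the textbook sub-threshold current model, I = I0·exp((Vgs − Vth)/(n·kT/q)). All device parameters below (I0, Vth, its temperature coefficient, and the ideality factor n) are illustrative assumptions, not Apollo specifications:

```python
import math

K_OVER_Q = 8.617333e-5  # Boltzmann constant / electron charge, in V/K

def subthreshold_current(vgs, temp_k, i0=1e-7, vth0=0.40, n=1.5, tc_vth=-0.002):
    """Simplified sub-threshold MOSFET current model (illustrative parameters):
    I = I0 * exp((Vgs - Vth(T)) / (n * kT/q)), with Vth drifting ~ -2 mV/K."""
    vth = vth0 + tc_vth * (temp_k - 300.0)  # threshold voltage falls as temperature rises
    thermal_v = K_OVER_Q * temp_k           # thermal voltage kT/q, ~26 mV at room temperature
    return i0 * math.exp((vgs - vth) / (n * thermal_v))

# Same gate voltage, 0 degC vs 50 degC:
ratio = subthreshold_current(0.30, 323.15) / subthreshold_current(0.30, 273.15)
# The exponential I-V curve plus threshold-voltage drift yields a roughly 20x
# current swing across an ordinary 50 degC range -- hence the heavy compensation.
```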

“Internally, we’re doing a lot of voltage conversion to get voltage down, we’re managing all the process variations, voltage variations, and temperature variations, and the result is dramatically lower energy,” Hanson continues. “There’s not any one silver bullet. There’s a wide range of things that we do to make sub-threshold possible.”

More than Moore’s

Advances in sub-threshold circuitry and voltage optimization will become more prevalent as Moore’s law continues to eke out smaller process nodes and the gate-to-source channels of MOSFETs shrink in size, necessitating lower and lower supply voltages as well as increasingly smaller thresholds. As Moore’s law advances towards the limits of physics, power consumption will become as fundamental a tenet of computing as performance, if not replace it.

“One of the challenges we see for Ambiq at a system level is that, as we’ve driven energy efficiency of the processor down, the overall contribution of the microprocessor to the system has gone down to the point where customers say, ‘Hey, you knock another 10x out of the energy, it doesn’t make a difference because you’re already very, very low,’” says Hanson. “What they really need is for us to knock that energy down by 10x, but they need a commensurate increase in performance to take advantage of that. That is to say, if I reduce energy by 10x, they want to see an accompanying 10x increase in performance so they can stay in the same power envelope but dramatically increase the processing power. We focus a lot on that as a company: How can I both increase performance and continue to reduce energy?

“We saw Moore’s law in the high-performance computing industry with PCs, notebooks, and phones. We saw Moore’s law lead the way in allowing us to deliver something better and more incredible every year so we could add more features, but we were kind of stuck in the same form factors, the same battery life, etc.,” Hanson continues. “We’re going to see the same thing happen with power consumption, and that means that we’re constantly going to be talking about how we need lower energy in every component.”

 

via IOT Design

Microchip Releases Industry’s First End-to-End Security Solution for IoT Devices Connected to Amazon Web Services’ Cloud



CHANDLER, Ariz. — Microchip Technology Inc. (NASDAQ: MCHP), a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, today announced the industry’s first end-to-end security solution for Internet of Things (IoT) devices that connect to Amazon Web Services IoT (AWS IoT). Microchip and AWS collaborated to develop this integrated solution to help IoT devices quickly and easily comply with AWS’s mutual authentication IoT security model. Using Microchip’s new security solution will help companies implement these security best practices from evaluation through production. The solution adds a high level of security, simplifies the supply chain, and is now one of the fastest ways to connect to the AWS Cloud.

Currently, third-party manufacturers of devices that connect to the AWS IoT service must take specific actions to comply with the advanced security model. First, they must pre-register their security authority with AWS servers in order to establish a trust model. Second, for each IoT device they must generate unique cryptographic keys that are mathematically linked to the pre-registered security authority. Finally, the unique device keys must remain secret for the life of the device. In volume production, the generation and secure handling of these unique keys can be a daunting challenge in the manufacturing chain, especially where third parties with different trust and compliance levels are involved.

Microchip’s end-to-end security solution handles this process during three production steps. First, the AT88CKECC kit will allow customers to meet the security standard of AWS’s mutual authentication model and easily connect to the AWS IoT platform during the evaluation and engineering phase. Second, the AWS-ECC508 device assists with meeting security standards during the prototyping and pre-production phase. Finally, devices will be customized for production stages to ensure information security in customer applications.

Customers simply solder the device on the board and connect it over I2C to the host microcontroller (MCU), which runs an AWS Software Development Kit (SDK) leveraging the ECC508 device for AWS IoT. Once this is complete, there is no need to load the unique keys and certificates required for authentication during manufacturing, as the AWS-ECC508 is pre-configured to be recognized by AWS without any intervention. All the information is contained in a small (3×2 mm), easy-to-deploy crypto companion device.

AWS and the ECC508 device naturally complement each other with comprehensive mutual authentication security capabilities. The device has strong resistance to environmental and physical tampering, including countermeasures against expert intrusion attempts. In addition, the device features a high-quality random number generator, internal generation of secure unique keys, and the ability to seamlessly accommodate various production flows in the most cost-effective manner. A typical IoT device is built around a small [8-bit] microcontroller and is battery powered; it is constrained in CPU performance for low-latency responsiveness, in memory and code space for security protocols, and in how much power it can consume in order to preserve battery life. The ECC508 device provides low-power, processor-agnostic cryptographic acceleration for compatibility with the widest range of resource-constrained IoT devices.

“We understand the often complex nature of implementing AWS mutual authentication in microcontrollers,” said Nuri Dagdeviren, vice president and general manager of secure products at Atmel, a wholly-owned subsidiary of Microchip. “The customer would need to have some understanding of how to secure a software implementation, and this often creates a huge barrier. We have had a long-standing relationship with AWS and are thrilled to have the opportunity to work with the world’s largest cloud provider to build a solution that helps our customers easily and securely connect to the AWS Cloud.”

“We have a strong relationship with Microchip and we are very excited to be able to offer a world-class solution to anyone who wishes to deploy secure and scalable IoT solutions on our cloud services,” said Marco Argenti, vice president, Mobile and IoT, Amazon Web Services, Inc. “For all companies we work with, embracing security best practices is an essential step in achieving our mutual goal of offering customers the best and most secure IoT platform available. We believe this new solution will be one of the simplest and most cost-effective ways for our customers to comply with our security best practices.”

For more information about Microchip’s end-to-end security solution for AWS Cloud connected devices, visit: http://ift.tt/2aOi6hR

Pricing and Availability

The AWS-ECC508 kit (part # AT88CKECC-AWS-XSTK) is available today at $249 each.

The AWS-ECC508 (part # ATECC508A-MAHAW-S and ATECC508A-SSHAW-T) comes in UDFN and SOIC packages and is available today for sampling and volume production starting at $0.60 each in 10,000-unit quantities.

For additional information, or to purchase the kit, visit http://ift.tt/2aOi6hR. To purchase other products mentioned in this press release, contact one of Microchip’s authorized distribution partners.

Follow Microchip:

 RSS Feed for Microchip Product News: http://ift.tt/2aPU8Gd

 Twitter: http://twitter.com/microchiptech

 Facebook: http://ift.tt/28ZSC1x

 YouTube: http://www.youtube.com/user/microchiptechnology

About Atmel

Atmel is a wholly-owned subsidiary of Microchip Technology Inc. (NASDAQ: MCHP).

About Microchip Technology

Microchip Technology Inc. (NASDAQ: MCHP) is a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, providing low-risk product development, lower total system cost and faster time to market for thousands of diverse customer applications worldwide. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support along with dependable delivery and quality. For more information, visit the Microchip website at www.microchip.com.

via IOT Design

Check these boxes before deploying IoT devices

There was a time when isolation helped ensure IT system security. Mainframes lived in glass rooms accessible to a few carefully screened attendants. If there was any kind of network, it existed within the building and provided wired connectivity for a flock of dumb terminals.

The big change came with the arrival of the Internet. Suddenly, the network covered the planet, and the same web that provided 24/7 access for remote offices, customers, partners, and friends replaced once-secure physical walls with loose affiliations open to any interested party that cared to visit. This made security a critical issue and spawned an entire industry.

With the advent of embedded technology, intelligence continued migrating from fortified data centers to millions of devices, each with its own functions and place on the web. This spread presented a target-rich environment for bad actors around the world. These weren’t just criminals, though there were plenty of those; they consisted of anyone from state actors and terrorists to curious individuals simply looking for a challenge or a place to play. Whoever they may be, these individuals could maliciously or even unintentionally alter, steal, or otherwise tamper with content.

With Internet of Things (IoT) nodes showing up in home appliances, medical devices, kiosks, vehicles, buildings, utilities, and other sorts of infrastructure, the number of IoT devices will be huge. They’ll be sensors collecting, storing, and transmitting data and components of a smart grid. They’ll send anything from occasional alerts to large streams of real-time data and will link vast arrays of unattended devices into networks. They’ll differ in size and capabilities, but the one thing they will all have in common is connectivity. And in reality, if it’s connected, it’s vulnerable. The big question today is, how do we protect the growing number of small, relatively inexpensive, autonomous devices that make up the IoT?

The challenges

Embedded devices in IoT applications face assault from many directions and in many forms. Being connected devices, they can be reached via wired or wireless connections. And they can still be attacked “the old fashioned way” by direct physical access, which is a greater problem than ever because there are so many of them in so many places and because they usually operate unattended.

Before deploying an IoT device, consider these issues first:

  • Systems on the IoT must be able to trust their remote devices, recognize that those devices are legitimate, trust that any access to the device has been by legitimate users, and trust that although users may be able to put data/images on the device, they haven’t tinkered with the device’s manufacturer code.
  • Systems receiving IoT data can be fooled by a device that pretends to be authorized, but is actually controlled by a hacker sending malicious or altered code. To do this, the hacker would only have to go to a manufacturer’s website, download a copy of firmware, upload it to a device he or she controlled, and send spurious data from what appeared to be an authorized device.
  • Like ants at a picnic, hackers are always looking for new ways to attack legitimate systems. Systems, in turn, require regular updates to patch chinks in their armor. Updating a single system can be easy, but with large numbers of scattered IoT devices, keeping up can be daunting. The risk is that systems either fall behind on critical updates or require large amounts of support time to stay up-to-date.
  • IoT devices often store data, which is typically protected by a key. In many cases, a hacker who gains access to the device can also, with a little added effort, find the key needed to decrypt the stored data.
  • With devices spread across the globe, often in remote locations, a hacker can physically break into a device, plugging in to gain access through the JTAG hardware debug port, through serial administration ports, or through network (Ethernet) ports. It's a little more work for the hacker, and typically not an option for an amateur, but the damage can be just as great. Systems can currently provide users with keys to securely access these ports, but this typically must be done on a labor-intensive, user-by-user basis.
  • Because most IoT devices will be made as small and inexpensive as possible, there’s security functionality that just won’t fit on a device’s main processor and, over time, security demands will continue to grow. Systems need a road map defining where increased security capabilities will reside in the foreseeable future.
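The spoofed-device risk above exists because downloadable firmware alone is not a secret: anyone can obtain it. What defeats the attack is a secret that never ships in a firmware image. The sketch below illustrates the idea with a per-device HMAC key provisioned at manufacture; the key store, message format, and function names are illustrative, not from any specific product.

```python
import hashlib
import hmac
import secrets

# Illustrative per-device secret, provisioned at manufacture and never
# included in downloadable firmware images. In practice the backend
# would hold these in a hardened key store.
DEVICE_KEYS = {"device-001": secrets.token_bytes(32)}

def sign_reading(device_id: str, payload: bytes) -> bytes:
    """Device side: sign a telemetry payload with the per-device key."""
    return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).digest()

def accept_reading(device_id: str, payload: bytes, signature: bytes) -> bool:
    """Backend side: accept data only from a device that proves it holds
    the provisioned key; copied firmware alone cannot produce this."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

payload = b'{"temp": 21.5}'
sig = sign_reading("device-001", payload)
print(accept_reading("device-001", payload, sig))           # True
print(accept_reading("device-001", payload, b"\x00" * 32))  # forged -> False
```

A hacker who downloads the firmware and flashes it onto a clone still lacks the per-device key, so the clone's data is rejected.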

The checklist

  • To ensure that only legitimate users access the device, the system should authenticate every time it starts up or is accessed. This is done via a manufacturer-supplied certificate that customers must present when accessing the device and to which hackers won’t have access.
  • To prevent hackers from creating counterfeit devices that send their own version of data to the system, every legitimate device must have its own unique digital signature. This signature isn't included in downloadable firmware and can't be forged, so only data from genuine devices will be accepted.
  • To prevent falling behind on firmware updates or placing an unnecessary burden on users, a system should regularly initiate a check for updates and, if any are found, download and apply them automatically so the device remains as thoroughly protected as possible.
  • To keep hackers from accessing stored data, the system must put the decryption key in a secured lockbox. This is a second level of security beyond simply requiring a key to access stored data. In many cases, keys can be easily found by anyone who can access the data itself, which is like locking your door and hiding the key under the mat. Second-level protection is like the lockbox a real estate agent hangs on a property’s front door; the lockbox requires a code of its own to access the door key.
  • To protect against hackers who can physically access a device, a system should challenge anyone who attempts access through any of its physical ports. This can be achieved through the same authentication process used for access over a network.
  • IoT technology is still in its early stages and will almost certainly face greater security challenges than it does today. To maintain the highest level of security and long life without overburdening devices’ main processors, systems will begin to incorporate hardened co-processors dedicated to security functions. And since IoT devices are, by definition, connected, additional security features will have to be provided over, and resident in, the cloud. In short, today’s best security won’t be sufficient tomorrow, and you’ll need capacity to accommodate new and better protection.
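The first and fifth items above come down to the same mechanism applied at different entry points: a challenge-response handshake in which the client proves it holds a manufacturer-provisioned credential without ever transmitting it. A minimal sketch, assuming a shared HMAC credential; all names and the protocol framing here are illustrative.

```python
import hashlib
import hmac
import secrets

# Illustrative manufacturer-provisioned credential, shared between the
# device and an authorized administration tool.
ACCESS_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Device side: emit a fresh random nonce for each access attempt,
    whether at startup or on a serial, JTAG, or network port."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Client side: prove possession of the credential by keyed-hashing
    the nonce; the credential itself never crosses the wire."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """Device side: grant access only on a matching response."""
    expected = hmac.new(ACCESS_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
print(verify(nonce, respond(nonce, ACCESS_KEY)))    # legitimate admin
print(verify(nonce, respond(nonce, b"guess" * 8)))  # attacker without key
```

Because each nonce is fresh, a recorded response can't be replayed later, which matters for unattended devices whose traffic an attacker may capture at leisure.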

The bottom line

Unfortunately, perfect security doesn’t exist. Witness recent break-ins at well-protected government and corporate sites and the FBI’s highly publicized hacking of an iPhone. The goal isn’t to make interference impossible, but rather to make it difficult. IoT systems and devices may be tempting for hackers, but they aren’t Fort Knox or the Bank of England’s vault, and successful security may simply entail using the toughest available protection and counting on rational hackers to look elsewhere for a softer target.

Donald Schleede is the information security officer at Digi International, a Minnesota-based manufacturer of embedded systems, routers, gateways, and other communications devices for machine-to-machine systems.

http://www.digi.com/

via IOT Design

PICMG COM Express Type 7 includes 10GbE

A new COM Express specification developed by the PICMG is soon to be released. One of the upcoming major advancements is the COM Express Type 7 pinout supporting up to four 10GbE-KR interfaces, making it ideally suited for the new edge node servers required by IoT and Industry 4.0 applications.

By 2020, it is expected that data created and copied annually (much of it from IoT, cloud, and big data applications) will increase to around 44 zettabytes. That’s 44 trillion gigabytes. In order to minimize traffic between clients, central clouds, and data centers, data should be processed as close as possible to its place of origin or inquiry location. IoT and Industry 4.0 applications need this decentralized capacity to process sensor and measurement data in real time. In particular, data-intensive video streaming applications need local, virtualized systems with multiple cores and larger caches to achieve the required transcoding performance. This is not only relevant for video-on-demand in the telecom segment, but also for medical and security applications. Another important application for edge servers is deep packet inspection, which ensures data security and optimal quality of service.

The structure of the network is thus transformed into an infrastructure that uses high-performance edge node servers distributed over the whole network, performing their functions close to the end user and reducing latency and backbone traffic. A high bandwidth of 10GbE is essential for such appliances. Examples of system-on-chip solutions that provide this performance at a relatively low power consumption of 65 watts or less are the Intel Xeon processor D family and the Intel Atom processor C family. ARM designs are also interesting, such as the recently launched AMD Opteron A1100 ARM processor, as are Power Architecture platforms such as the QorIQ family from NXP. Developers do not, however, need to decide today which processor is the best platform. They can stay flexible using the new COM Express specification with Type 7 pinout.

[Figure 1 | The new Type 7 pinout provides up to four 10GbE-KR ports for bandwidth- and data-intensive designs.]

The new Type 7 pinout

One of the most fundamental innovations of the COM Express Type 7 pinout is the support of up to four 10GbE interfaces, essential for the next generation of edge node appliances. On the module they are implemented as 10GbE-KR, i.e. as single backplane lanes according to IEEE 802.3, Clause 49. The physical implementation of the 10GbE interfaces takes place on the carrier board, where developers can define the signal transmission as optical (SFP+) or copper (10GBASE-T). This provides flexibility for new designs.

[Figure 2 | The new Type 7 pinout provides up to 4x 10GbE-KR for bandwidth- and data-intensive designs.]

To a large degree, the Type 7 pinout follows that of Type 6. To create the capacity for up to four 10GbE interfaces, the signal lines of the digital display interfaces (DDIs) on the CD connector of the Type 6 pinout are used. Since most new edge node appliances do not require high-resolution, multi-screen outputs, multiple DDI interfaces are not needed for these applications. For local management and maintenance consoles, the eDP/LVDS interface is still available and located on the AB connector. If developers wish to retain DDI, a hybrid solution with 2x 10GbE and a DDI interface will be supported by the specification. Together with the eDP/LVDS interface on the AB connector, solutions that offer both high network and video performance are also possible.

No change is expected to the existing 1 GbE port on the Type 6 AB connector. However, there are discussions on expanding this port to 10GbE, including the physical definition and PHY for 10GBASE-T. This matter can only be decided when conventional non-server processors such as the Intel Core processor family have comprehensively integrated 10GbE including PHY, which is not likely to take place before 2018.

1U with up to 10 modules

With COM Express Type 7, the benefits of the COM Express specification are made available to new, high-bandwidth, data-intensive edge node server applications. Thanks to the Computer-on-Module design concept and standardized pinout, system design becomes independent of processor technology, and systems can be upgraded with a simple module exchange. And thanks to their compact dimensions, COM Express modules enable a high packing density: 10 modules can be integrated in a 1U enclosure, providing a maximum combined data transfer rate of 0.4 terabits per second. The modular COM Express design makes this type of solution highly flexible and scalable, minimizes development costs, and shortens time-to-market. OEMs also gain more design security and can use their designs for longer, thereby increasing their return on investment.
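The aggregate figure follows directly from the per-module numbers; a quick check (module count and port count as stated above):

```python
# Aggregate bandwidth of a fully populated 1U chassis:
# 10 modules, each with four 10GbE-KR ports.
modules = 10
ports_per_module = 4
gbit_per_port = 10

total_gbit = modules * ports_per_module * gbit_per_port
print(total_gbit)         # 400 Gbit/s
print(total_gbit / 1000)  # 0.4 Tbit/s
```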

[Figure 3 | ADLINK COM Express Type 7 block diagram. Check the ADLINK website for updated product information.]

Further upgrades

The new specification is expected to become available at the end of Q2 2016, with the Type 7 CD connector pin assignment expected to be released at the end of Q1 to allow vendors to begin working on new designs. The COM Express Rev. 3.0 specification will include both the new Type 7 pinout as well as expected changes to existing pinout types to address minor issues and to migrate from the legacy LPC bus to the eSPI bus. Greater unification on the AB connector between the existing Type 6 and Type 10 pinouts in order to simplify design considerations for manufacturers is also in the works.

Jeff Munch, Chief Technology Officer of ADLINK, has been Chairman of most PICMG COM Express subcommittees since 2009. In this role he has headed the last five of seven COM Express initiatives, including COM Express Type 7, and has brought together companies such as Advantech, Congatec, Kontron, MSC/Avnet and Intel.

via IOT Design