We’re now just two weeks away from this year’s ORConf, of which AB Open is a proud sponsor. Taking place from the 8th to the 10th of September as part of the Wuthering Bytes technology festival programme in Hebden Bridge, the event boasts a list of talks that has grown to impressive proportions.
Highlights from this year’s ORConf schedule include: an update on the Parallel Ultra Low Power platform (PULP), an effort to build multi-core RISC-V implementations for Internet of Things (IoT) use which is scheduled to release its PULPino v2 platform by year’s end; a status report from the LibreCores Continuous Integration (CI) project launched at ORConf 2016; a report from lowRISC founder Alex Bradbury on memory handling improvements, Linux kernel support for tagged memory, the RISC-V LLVM backend, and Google Summer of Code (GSoC); and a presentation by Clifford Wolf, best known as the creator of the Yosys Open Synthesis Suite, on the use of the riscv-formal framework for end-to-end formal validation of RISC-V implementations.
Full details of the event can be found on the official website, while those looking to attend should complete this form.
Ahead of his ORConf talk, Alex Bradbury has posted a status report on the LLVM RISC-V backend effort – and with the work now his main focus, progress is proving rapid.
“As you will have seen from previous postings, I’ve been working on upstream LLVM support for the RISC-V instruction set architecture,” Alex explains in his post to the RISC-V software development (sw-dev) mailing list. “The initial RFC provides a good overview of my approach. Thanks to funding from a third party, I’ve recently been able to return to this effort as my main focus. Now feels like a good time to give an update on where the RISCV backend is at, and how you can help.”
At present, the LLVM port consists of a complete and regularly-rebased patch set, 16 patches of which have been put up for review, with 7 committed and 8 still awaiting review, and almost the entire GNU Compiler Collection (GCC) torture suite compiles and runs when built for 32-bit RISC-V. Next steps include fixing the final issues raised by the torture suite, improving documentation, adding machine code (MC) layer support for the M, A, F, and D standard extensions, and beginning benchmarking against GCC by the end of October.
“I’ve mapped out a number of TODO items here,” Alex adds, “which I hope can help to co-ordinate efforts. Where possible, this indicates the current preferred approach (e.g. we plan to provide RV64 support building upon Krzysztof’s variable-sized register class work).” Alex also advises that “I would really like to avoid setting up a new ‘downstream’, and to use this opportunity to pull in new people to upstream LLVM development,” but warns of the large gap between the patches reviewed and committed upstream and the complete patchset. “If you would like to help, reviewing the remaining patches is an incredibly valuable way to do so.”
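For those keen to try the in-progress backend for themselves, a quick sanity check is to compile a small, freestanding function for 32-bit RISC-V and disassemble the result – a minimal sketch, assuming a clang/LLVM build with Alex’s patch set applied, and noting that the exact target triple and options accepted may change as the remaining patches land upstream:

```cpp
// bitcount.cpp - a torture-suite-style test case (hypothetical file name).
// Build and inspect with the patched toolchain, for example:
//   clang++ --target=riscv32 -O2 -c bitcount.cpp -o bitcount.o
//   llvm-objdump -d bitcount.o
// No headers or library calls are used, so no RISC-V runtime is required.
extern "C" unsigned bitcount(unsigned x) {
    unsigned count = 0;
    while (x != 0) {
        x &= x - 1;   // clear the lowest set bit
        ++count;
    }
    return count;
}
```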
In other RISC-V news, Codasip has announced the launch of a new model in its Berkelium (Bk) processor family: the low-cost, low-power Bk-1 designed with embedded and Internet of Things (IoT) projects in mind.
“This processor is perfect for IoT ASIC [Application Specific Integrated Circuit] designers looking to move up from 8-bit processors to 32-bit processors,” claimed Karel Masarik, chief executive and founder of Codasip, at the launch. “Like all members of the Codasip Bk family of processors, the Bk-1 is fully compliant with the RISC-V open standard, assuring customers that their embedded software is truly portable and their designs are not locked into a proprietary instruction set architecture (ISA) such as ARM.”
The Bk-1 design starts at 9,000 gates with a clock speed of 350MHz when implemented on a 55nm process node. Additional features include an optional power management unit, a Joint Test Action Group (JTAG) debug controller, and bridges to ARM’s Advanced Microcontroller Bus Architecture (AMBA) for integration into existing designs based on ARM cores. Pricing begins at $40,000, Codasip has confirmed, with evaluation kits available free of charge through the company’s official website.
RISC-V pioneer SiFive, meanwhile, has announced a partnership with Rambus to add the company’s cryptography technology to the SiFive Freedom RISC-V platform under its DesignShare programme.
“To fulfill our mission to democratise access to custom silicon and upend the stagnant semiconductor industry, SiFive is committed to recruiting leading-edge companies like Rambus to help us revolutionise SoC design,” claimed Naveed Sherwani, newly-appointed chief executive of SiFive, at the announcement. “The growing ecosystem of DesignShare IP providers ensures that aspiring system designers have a catalog of IP from which to choose when designing their SoC. We’re thrilled that Rambus has joined us in enabling innovation through DesignShare, and we look forward to future success together.”
DesignShare, the company explains, allows companies like SiFive and Rambus to provide access to their intellectual property at low or no cost to emerging companies. “Rambus and SiFive share a similar philosophy of easing the path to designing innovative and cost-effective SoCs [Systems on Chips],” added Martin Scott, senior vice president and general manager of Rambus’ security division. “SiFive and Rambus have agreed to partner with an intent of providing chip-to-cloud-to-crowd security solutions that easily integrate with the SiFive Freedom platform and support the open and growing RISC-V hardware ecosystem. Our security cores embedded in Freedom Platform SOCs will enable secure in-field device connection and attestation for updates and diagnostics.”
Details of the technology brought to the table by Rambus have yet to be added to the SiFive website.
Nvidia has launched a revised bundle of its Jetson TX1 high-performance single-board computer, which will be available in limited numbers at a considerable discount from its $499 launch price: the Jetson TX1 Developer Kit SE.
Sharing its hardware with the original Jetson TX1, the Jetson TX1 SE includes a system-on-module featuring a 64-bit quad-core ARM Cortex-A57 central processor and a 256-core Maxwell graphics processor boasting a claimed teraflop of compute performance, in addition to 4GB of LPDDR4 memory, 16GB of eMMC storage, and gigabit Ethernet connectivity. Designed with computer vision and deep-learning projects in mind, the original kit included a bundled camera module; the SE, by contrast, does not.
The loss of the camera module brings a considerable price cut, from the original $499 (around £390) down to $199 (around £155). Nvidia, however, has warned that only limited numbers will be available at this price and with a strict one-per-developer limit – without, sadly, putting a firm figure on availability. Those in the US and Canada can apply to purchase one now, with international availability promised in September.
Jeff Child, editor-in-chief of Circuit Cellar Magazine, has written an impassioned but potentially controversial plea to developers: don’t wait for IoT standards before building something exciting.
“Now that I’m sold that the hype around IoT is justified, I’m intrigued with this question: what specific IoT standards and protocols are really necessary to get started building an IoT implementation,” Jeff writes in his piece, first published in the September issue of the magazine. “From my point of view, I think there’s perhaps been too much hesitation on that score. I think there’s a false perception among some that joining the IoT game is some future possibility — a possibility waiting for standards.
“IoT requires the integration of edge technologies where data is created, connectivity technologies that move and share data using Internet and related technologies and then finally aggregating data where it can be processed by applications using Cloud-based gateways and servers. While that sounds complex, all the building blocks to implement such IoT installations are not future technologies. They are simply an integration of hardware, software and service elements that are readily available today. In the spirit of Circuit Cellar’s tag line ‘Inspiring the Evolution of Embedded Design,’ get inspired and start building your IoT system today.”
CERN Openlab student Lamija Tupo has begun writing a series on the use of IoT technologies in the control systems of the Large Hadron Collider, the world’s largest and most powerful particle accelerator.
Lamija’s first post discusses the selection of suitable technologies, in particular the framework: “We started the project with a minimal set of requirements in order to enable the communication between sensors and analytical applications through an IoT architecture,” Lamija writes. “The second step was comparing the available open source frameworks that could fit the initial given requirements.”
Having selected AllJoyn, Lamija then goes into more depth on exactly what made that framework stand out over rivals including OpenSensors and IoTivity. “Adding a new device to the network is very easy, as one device can be defined as ‘onboarder’ and it registers new devices ‘onboardees’ into the network. The devices use interfaces for bridging their differences, and these interfaces, along with services, can be private or common for all devices. AllJoyn also has mDNS implemented, which is the preferred service discovery for this project. It also has implemented a notification service, which is needed for this project.”
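The CERN code itself hasn’t been published, but as a rough illustration of the discovery model Lamija describes, an AllJoyn Standard Core C++ client typically registers an AboutListener and asks the bus which devices implement an interface of interest – the application and interface names below are purely illustrative, and the details will doubtless differ from the Openlab project’s actual implementation:

```cpp
// A minimal sketch of AllJoyn device discovery: register an AboutListener,
// then ask the bus which devices implement a given interface. Assumes a
// standalone AllJoyn router is already running on the host.
#include <cstdio>
#include <chrono>
#include <thread>

#include <alljoyn/AboutListener.h>
#include <alljoyn/BusAttachment.h>
#include <alljoyn/Init.h>

using namespace ajn;

// Called whenever a device announces itself on the network via About.
class SensorListener : public AboutListener {
    void Announced(const char* busName, uint16_t version, SessionPort port,
                   const MsgArg& objectDescription, const MsgArg& aboutData) override {
        std::printf("Announcement from %s (About v%u, session port %u)\n",
                    busName, static_cast<unsigned>(version), static_cast<unsigned>(port));
    }
};

int main() {
    AllJoynInit();                             // initialise the AllJoyn library

    BusAttachment bus("SensorWatcher", true);  // illustrative application name
    bus.Start();
    bus.Connect();                             // connect to the local router

    SensorListener listener;
    bus.RegisterAboutListener(listener);

    // Listen for anything implementing this (hypothetical) sensor interface.
    const char* interfaces[] = { "org.example.cern.Sensor" };
    bus.WhoImplements(interfaces, 1);

    std::this_thread::sleep_for(std::chrono::seconds(30));

    bus.UnregisterAboutListener(listener);
    AllJoynShutdown();
    return 0;
}
```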
Interested parties can follow the blog post series via Lamija’s author page on Intel’s Developer Zone.
Improbable Studios’ Joe Broxson has published plans for a LoRa Sniffer based on the low-cost Adafruit Feather development board, offering those working with the long-range low-power radio network standard a simple handheld tool for diagnosis and experimentation.
“Before building a mesh with a bunch of nodes, I decided it would be interesting to see how much LoRa is being used in my area. How is this best achieved? By building a portable sniffer/scanner of course,” Joe writes of his inspiration. “This device should be in a form factor that I can set on a windowsill, put in my pocket when I walk around, or on my dash while I drive. When I saw Adafruit’s 3D printable case for their TFT FeatherWing, I was intrigued. This looked like the perfect enclosure for my project. This, along with a Feather M0 with RFM95 LoRa Radio, an antenna, battery, and switch makes a complete solution.”
The system scans LoRa channels and offers a live view of monitored packets on its built-in TFT display, while also logging data to an SD card for later analysis. The full source code has been published, though under an unspecified licence.
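Joe’s published sketch drives the TFT FeatherWing and handles SD logging; stripped down to just the receive loop, a version built on the RadioHead RH_RF95 driver – assuming the Feather M0 RFM9x’s default wiring and US 915MHz operation, and printing over serial rather than to a display – might look something like this:

```cpp
// Cut-down LoRa "sniffer" sketch for an Adafruit Feather M0 with RFM95,
// using the RadioHead RH_RF95 driver. Packets are dumped over the serial
// port; it will only hear traffic using RadioHead's default modem settings
// (125kHz bandwidth, spreading factor 7) on the configured frequency.
#include <SPI.h>
#include <RH_RF95.h>

#define RFM95_CS   8      // chip select on the Feather M0 RFM9x
#define RFM95_RST  4      // radio reset pin
#define RFM95_INT  3      // interrupt pin
#define RF95_FREQ  915.0  // change to 868.0 for EU868 deployments

RH_RF95 rf95(RFM95_CS, RFM95_INT);

void setup() {
  pinMode(RFM95_RST, OUTPUT);
  digitalWrite(RFM95_RST, HIGH);

  Serial.begin(115200);
  while (!Serial) {}                 // wait for the USB serial port

  // Manually reset the radio before bringing it up.
  digitalWrite(RFM95_RST, LOW);
  delay(10);
  digitalWrite(RFM95_RST, HIGH);
  delay(10);

  if (!rf95.init()) {
    Serial.println("RFM95 init failed");
    while (true) {}
  }
  rf95.setFrequency(RF95_FREQ);
  rf95.setPromiscuous(true);         // accept packets regardless of address
}

void loop() {
  if (rf95.available()) {
    uint8_t buf[RH_RF95_MAX_MESSAGE_LEN];
    uint8_t len = sizeof(buf);
    if (rf95.recv(buf, &len)) {
      Serial.print("RSSI ");
      Serial.print(rf95.lastRssi());
      Serial.print("  len ");
      Serial.print(len);
      Serial.print("  data");
      for (uint8_t i = 0; i < len; i++) {
        Serial.print(' ');
        Serial.print(buf[i], HEX);
      }
      Serial.println();
    }
  }
}
```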
Finally, Ken Shirriff has written of his discovery and reverse-engineering of a counterfeit semiconductor with a difference: it’s designed to mimic a device from the 1960s.
“A die photo of a vintage [7400-series] 64-bit TTL RAM chip came up on Twitter recently, but the more I examined the photo the more puzzled I became,” Ken writes. “The chip didn’t look at all like a RAM chip or even a TTL chip, and in fact appeared partially analog. By studying the chip’s circuitry closely, I discovered that this RAM chip was counterfeit and had an entirely different die inside.”
While it’s common – and has been since the 7400-series TTL chips were shiny and new – for shady vendors to change the markings on chips to pass off cheaper or lower-specification alternatives as higher-priced parts, Ken’s painstaking analysis revealed that the chip wasn’t even memory: it was a Mostek MK5085 touch-tone dialler module from 1975, relabelled as an entirely non-functional RAM chip.
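For anyone unfamiliar with what the relabelled die actually does: a touch-tone dialler like the MK5085 produces each keypad tone by summing one of four low “row” frequencies with one of four high “column” frequencies – a technique simple enough to sketch in a few lines, here generating 20ms of the digit ‘5’ (770Hz plus 1,336Hz) as comma-separated samples:

```cpp
// A DTMF digit is the sum of one row tone and one column tone; the digit '5'
// sits on the 770Hz row and the 1336Hz column. This prints 20ms of samples
// at an 8kHz sample rate as CSV, suitable for plotting.
#include <cmath>
#include <cstdio>

int main() {
    const double rowHz = 770.0;       // rows: 697, 770, 852, 941 Hz
    const double colHz = 1336.0;      // columns: 1209, 1336, 1477, 1633 Hz
    const double sampleRate = 8000.0;
    const double pi = 3.14159265358979323846;

    for (int n = 0; n < 160; ++n) {   // 160 samples = 20 ms at 8 kHz
        const double t = n / sampleRate;
        const double sample = 0.5 * std::sin(2.0 * pi * rowHz * t)
                            + 0.5 * std::sin(2.0 * pi * colHz * t);
        std::printf("%d,%.6f\n", n, sample);
    }
    return 0;
}
```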
Ken’s full post is a recommended read, both for the steps taken to analyse the device and its function, and for the claims made by the seller as to how exactly a DTMF tone generator came to be sold as 7400-series TTL RAM.