Esperanto Technologies, a company focused on developing energy-efficient many-core accelerators for artificial intelligence and machine learning applications, has announced the close of a whopping $58 million Series B funding round – more than ten times its previous investment rounds combined.

“Despite still operating largely in stealth mode, we appreciate this strong show of support from strategic and VC investors who had confidential briefings about our plans and believe we have a compelling solution for accelerating ML applications,” claims president and chief executive Dave Ditzel, co-author of the paper The Case for RISC and founder of low-power x86 specialist Transmeta. “Esperanto has assembled one of the most experienced VLSI [very large scale integration] product engineering teams in the ML [machine learning] industry, and we believe that will be a differentiating factor as we drive toward our 7nm products.”

It’s the push to 7nm which has prompted the funding round, the company explains: it aims to build its accelerator hardware, the ET-Minion, on an energy-efficient 7nm process node with more than a thousand RISC-V cores per chip. As well as the open RISC-V instruction set architecture (ISA) itself, the company is leveraging other open standards including the Open Compute Project (OCP), the PyTorch machine learning framework, the Glow machine learning compiler, and the Open Neural Network Exchange (ONNX).

More information on the company’s technology, a release date for which has not yet been provided, can be found on the official website.

Hex Five Security, Andes Technology, and Gowin Semiconductor have jointly announced a collaboration which will see Hex Five’s trusted execution environment added to Andes’ N(X)25 RISC-V cores running on Gowin’s GW-2A field-programmable gate array (FPGA) family.

“The cost of a robust security implementation on RISC-V is now negligible – the future of RISC-V is security by default,” claims Don Barnetson, co-founder of Hex Five Security, of the company’s MultiZone Security which it has released as a free and open standard. “We’re very excited to enter the Chinese market with such strong partners and expand access to simple, robust security that any developer can implement.”

“The Chinese market will be the first mass adopter of RISC-V,” predicts Dr. Charlie Su, chief technology officer at Andes. “We’re happy to work with Hex Five to provide our customers a simple, robust security implementation based on our RISC-V cores and our comprehensive AndeSight, an Eclipse-based development environment with optimised toolchains providing leading performance and reduced development time.”

“Increasingly, customers in China see security as a core requirement of their products,” adds Jim Gao, Gowin’s director of solution development. “With MultiZone Security, they can implement a robust security solution on our existing FPGAs without the need for new hardware, deep security expertise or even any changes to their toolset and workflow. This allows a customer to get to market fast, which is the goal of our FPGA solutions.”

The companies have confirmed that they will be demonstrating the MultiZone Security implementation at the Andes RISC-V Con on the 13th of November, while the standard itself is available to download now from GitHub.

The lowRISC project has announced the release of version 0.6 of its open silicon offering, bringing improvements to performance, debugging, and network connectivity – alongside a pledge to add alternative RISC-V cores to the current Rocket option.

Ten months after the release of lowRISC 0.5 brought initial support for Ethernet connectivity, lowRISC’s 0.6 milestone release offers a wealth of improvements. “This release includes an updated version of the Rocket RISC-V core, a higher core clock frequency, JTAG debugging support, Ethernet improvements, and more,” explains developer Alex Bradbury of the project’s progress since January. “We’ve also taken the opportunity to re-organise our documentation, adding an easy-to-follow quick-start guide.”

From here, the team is looking at shifting away from a pure focus on the Rocket RISC-V processor core by offering an additional choice to its users: “Our next development focus is to add support for dropping in the Ariane RISC-V design, from ETH Zurich,” Alex explains, “as an alternative to Rocket.”

The new getting-started tutorial can be found on the documentation website, while lowRISC 0.6 itself is available from the project GitHub repository.

Chinese electronics company Sipeed has launched a crowdfunding campaign for a range of development boards based on the Kendryte K210 dual-core 64-bit RISC-V processor, aiming to bring artificial intelligence (AI) processing to edge devices with pricing starting at $5 per board.

“Sipeed MAIX is the first RV64 AI board for edge computing,” the boards’ creator Sipeed explains. “It makes embedding AI in any IoT [Internet of Things] device possible. MAIX [boards] have tons of exciting features: dual-core RV64 IMAFDC, 8MB SRAM, a Neural Network Processor (0.25~0.5 TOPS, supports TensorFlow Lite), an APU [Audio Processing Unit], hardcore FFT [Fast Fourier Transform]… All this in a square inch, at 0.3W, from $5!”
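Taking Sipeed’s quoted figures at face value, a quick back-of-the-envelope calculation shows why the efficiency claim is notable – the numbers below come straight from the company’s own specifications, and the watt figure is the board-level estimate it cites:

```python
# Rough performance-per-watt estimate from Sipeed's quoted figures:
# 0.25-0.5 TOPS for the neural network processor at roughly 0.3 W.
tops_low, tops_high = 0.25, 0.5
power_w = 0.3

eff_low = tops_low / power_w    # ~0.83 TOPS/W at the low end
eff_high = tops_high / power_w  # ~1.67 TOPS/W at the high end

print(f"{eff_low:.2f}-{eff_high:.2f} TOPS/W")
```

Even the low end of that range puts meaningful neural-network throughput within a battery-powered budget, which is the core of the edge-computing pitch.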

The board range, aimed at both developers and hobbyists, starts with the compact MAIX Bit at $5 early-bird, $6 standard, and $8 once the campaign has completed. Accessories include a microphone array, a binocular camera, a docking board, and a bundle with an on-board camera. A larger development board, the MAIX GO Suite, offers additional features at $22, with an all-in-one bundle available for $45.

The company is also selling the MAIX-I Module, a castellated PCB with the processor and supporting hardware on board for use in embedded projects on custom motherboards. The modules are available with or without Wi-Fi connectivity, priced at $55 or $65 respectively.

More information on the board options can be found on the project’s Indiegogo page.

The Linux Foundation has announced the launch of a new sub-organisation, the GraphQL Foundation, to support the application programming interface (API) query language developed by Facebook and released under an open-source licence in 2015.

“As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing,” explains Lee Byron, who worked on the Facebook team developing the tool for internal use back in 2012. “Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support.”

“We are thrilled to welcome the GraphQL Foundation into the Linux Foundation,” adds Jim Zemlin, executive director at the Linux Foundation, of the new group’s formation. “This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language.”

Designed as an alternative to REST-based APIs, GraphQL is built with flexibility in mind: users can query precisely the data they need from multiple cloud-based sources in fewer lines of code, with greater performance and improved security – all of which holds considerable promise for replacing older data communication methods in the Internet of Things (IoT) and other embedded spheres. GraphQL itself has already entered production at companies including its creator Facebook, Netflix, Twitter, Pinterest, GitHub, Audi, and Atlassian, scaling to hundreds of billions of API calls.
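To illustrate the model, a minimal sketch: a GraphQL client sends a single request whose body names exactly the fields it wants, where a REST design might need several endpoints. The schema here (a `device` type with `name`, `temperature`, and `firmwareVersion` fields) is hypothetical, invented purely for illustration:

```python
import json

# A single GraphQL query asking for exactly the fields the client needs,
# in place of several REST round trips. The device schema is hypothetical.
query = """
query {
  device(id: "sensor-42") {
    name
    temperature
    firmwareVersion
  }
}
"""

# GraphQL requests are conventionally POSTed as a JSON object
# with the query document under a "query" key.
payload = json.dumps({"query": query})

# The server's JSON reply mirrors the requested shape, e.g.
# {"data": {"device": {"name": ..., "temperature": ..., "firmwareVersion": ...}}}
print(payload)
```

The client receives only the three requested fields and nothing else, which is what makes the approach attractive for bandwidth-constrained embedded devices.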

“GraphQL has redefined how developers work with APIs and client-server interactions,” claims the Linux Foundation’s Chris Aniszczyk, vice president of developer relations. “We look forward to working with the GraphQL community to become an independent foundation, draft their governance and continue to foster the growth and adoption of GraphQL.”

Full details on the GraphQL Foundation can be found on the Foundation website, while GraphQL itself is available from the official website.

Intel has officially launched its Neural Compute Stick 2, a low-cost plug-and-play USB-connected deep-learning accelerator based on the company’s Movidius Myriad X vision processing unit (VPU) and boasting an eightfold performance improvement over the previous generation.

Based on the same design as its predecessor, the Neural Compute Stick, Intel’s Neural Compute Stick 2 swaps out the original Movidius Myriad VPU for its successor, the Myriad X – a chip claimed to boost performance eightfold for deep neural network (DNN) processing. Internally, the device features 16 programmable streaming hybrid architecture vector engine (SHAVE) cores with a high-throughput memory fabric interconnect.

Intel is supporting the device, as with its previous releases, via a distribution of the OpenVINO toolkit which it claims speeds the development of computer vision applications through the inclusion of pre-trained models, optimised algorithms, and sample code. The company is also claiming that its application programming interface (API) offers “write once, deploy everywhere” development, supporting use on the Neural Compute Stick 2, its predecessor, and non-VPU hardware including CPUs, graphics processors, and field-programmable gate arrays (FPGAs).

More information on the Neural Compute Stick 2 can be found on the official website.