Reflections on ORConf 2016

We had the chance again this year to attend the really excellent ORConf 2016 (see here for a write-up on ORConf 2015). The focus of the conference is open source silicon in general, comprising open source hardware design tooling, open source processor designs and open source SoC designs. This area (and community) is very interesting and has the potential to have a significant impact on future cloud systems – here are some reflections on the event.

While the conference addressed many different aspects of the digital design space, there was a significant emphasis on embedded and IoT use cases: this can probably be attributed to the fact that these systems are somewhat easier to design and produce in small quantities, as well as the fact that there is a large opportunity in this area. It was noteworthy how quickly the community is evolving, with designs presented at last year's ORConf likely to manifest as working silicon at next year's conference. It was also noteworthy how the community is squeezing more and more compute performance out of each watt in their designs.

Regarding larger systems, there were a few interesting contributions. The RISC-V guys from Berkeley gave an update on the RISC-V Foundation and the community around it: clearly, the community has been gathering steam with more regular meetings, and the foundation already has a very healthy membership with many of the big players participating (NVIDIA, IBM, Google, Qualcomm, Mellanox, HP Enterprise and others – more detail here). Although the status of the foundation was reasonably clear from the conference presentations, there was not much information on success stories from the design and test of RISC-V solutions, or (from our selfish perspective) on how the world of manycore RISC-V processors – which are better suited to a cloud context – is evolving. They also presented some of their work on branch prediction, though with more emphasis on the use of Chisel in their design and the benefits of FIRRTL, the new intermediate representation for RTL that they use.

The OpenPiton guys from Princeton described their 25-core, 460-million-transistor chip based on OpenSPARC T1 cores, which is among the largest chips ever developed within an academic institution. They believe the design can scale up to 65k cores. They are still testing the chip to determine how well it works in different contexts, but it is already remarkable that an academic institution can produce such a complex design.

Another contribution relating to the data centre context was a presentation from the Ecoscale project, which identified how FPGAs can deliver improved performance for certain types of workloads in the HPC context. While their results were interesting, similar results have been shown previously; e.g. Microsoft has shown how FPGAs can improve the performance of Bing.

Perhaps the most interesting point discussed at the conference was the recent news from the great guys at Adapteva on their newly designed processor, which sports a tidy 1024 64-bit cores and has been sent to the foundry – all for a cool $1m. It is based on their Epiphany core, which underpinned their earlier product and was shown to have very impressive energy efficiency; on that basis, the energy efficiency of the new chip is likely to be impressive too.

Contrast this with the standard approach to designing chips at volume for DCs. It costs Intel an estimated $8.5bn to build a fab which can produce its latest generation chipsets, along with a further $2bn of R&D costs; by comparison, it takes an estimated $30m to produce a 28nm chip and $270m to produce a 7nm chip. Although Intel has increasingly demonstrated an ability to produce custom designs for large customers – specifically in the DC context (Amazon, Oracle, etc) – such customizations typically involve additions to the base Intel design, which provides only limited flexibility. The Adapteva story shows how these economics are changing dramatically. Further, Adapteva still leverages its own proprietary core – it is not difficult to envisage these costs falling further with more sharing of designs in the True Spirit Of Open Source™. These contrasting stories from opposite ends of the processor design spectrum point to a period of significant change in the processor space which will impact all areas of computing.
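To make the economics above concrete, here is a back-of-the-envelope sketch of how the one-off design cost (NRE) amortizes over shipped units, using only the dollar figures quoted above. The unit volumes are purely illustrative assumptions, not figures from the conference:

```python
# Back-of-the-envelope amortization of one-off design/NRE cost per shipped chip.
# Dollar figures are those quoted in the text; unit volumes are assumed for
# illustration only.

def nre_per_chip(nre_cost: float, volume: float) -> float:
    """Spread a one-off (non-recurring engineering) cost over shipped units."""
    return nre_cost / volume

scenarios = {
    "Intel leading-edge (fab + R&D)": (8.5e9 + 2e9, 100e6),  # assumed 100M units
    "Typical 28nm design":            (30e6, 1e6),           # assumed 1M units
    "Typical 7nm design":             (270e6, 1e6),          # assumed 1M units
    "Adapteva 1024-core tape-out":    (1e6, 10e3),           # assumed 10k units
}

for name, (nre, volume) in scenarios.items():
    print(f"{name}: ${nre_per_chip(nre, volume):,.0f} NRE per chip")
```

Even with these rough assumptions, the point stands: a $1m tape-out puts the per-chip design overhead within reach of small players at modest volumes, which is what makes the Adapteva story so striking.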

How will all of this impact the cloud specifically? It is highly likely that the large cloud providers will experiment with these new platforms – they have the expertise and the interest. Of course, any new solutions would be unlikely to impact their IaaS offerings, as these are so closely coupled to Intel processors, but they could clearly underpin growing PaaS offerings where there is a higher level of abstraction. The HPC world will also engage with these new designs, as it has very specific workloads, likes to tinker and has a high level of expertise. Whether such solutions will impact enterprise cloud computing is less clear: the enterprise view of the cloud is still very IaaS-based, and it will be some time before this changes; it is also less clear what drivers could push the enterprise world to adopt new processor designs. In any case, one would have to bet on heterogeneity being a key characteristic of the cloud as it evolves.
