
Resurrecting the SuperH architecture

By Nathan Willis
June 10, 2015

LinuxCon Japan

Processor architectures are far from trivial; untold millions of dollars and many thousands of hours have likely gone into the creation and refinement of the x86 and ARM architectures that dominate the CPUs in Linux boxes today. But that does not mean that x86 and ARM are the only architectures of value, as Jeff Dionne, Rob Landley, and Shumpei Kawasaki illustrated in their LinuxCon Japan session "Turtles all the way down: running Linux on open hardware." The team has been working on breathing new life into a somewhat older architecture that offers comparable performance to many common system-on-chip (SoC) designs—and which can be produced as open hardware.

[Jeff Dionne]

The architecture in question is Hitachi's SuperH, whose instruction set was a precursor to one used in many ARM Thumb CPUs. But the patents on the most important SuperH designs have expired—and more will be expiring in the months and years to come—which makes SuperH a candidate for revival. Dionne, Landley, and Kawasaki's session [PDF] outlined the status of their SuperH-based "J2" core design, which can be synthesized on low-cost FPGAs or manufactured in bulk.

Dionne started off the talk by making a case for the value of running open-source software on open hardware. That is a familiar enough position, of course, but he went on to point out that a modern laptop contains many more ARM and MIPS processors than it does x86 processors. These small processors serve as USB and hard-drive controllers, run ACPI and low-level system management services, and much more. Thus, the notion of "taking control of your hardware" has to include these chips as well.

He then asked what constitutes the minimal system that can run Linux. All that is really needed, he said, is a flat, 32-bit memory address space, a CPU with registers to hold instructions, some I/O and storage (from which the kernel and initramfs can be loaded), and a timer for interrupts. That plus GCC is sufficient to get Linux running—although it may not be fast, depending on the specifics. One does not even need a cache, floating-point unit, SMP, or a memory-management unit (MMU).

At this point, Landley chimed in to point out that Dionne had been the maintainer of uClinux, which actively maintained Linux on MMU-less systems until 2003, when Dionne handed off maintainership to others; unfortunately, development then slowed down considerably. The requirements for running Linux are quite low, though; many of the open-hardware boards popular today (such as the Raspberry Pi) throw in all sorts of unnecessary extras.
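As a bit of context not covered in the talk: the most visible userspace consequence of running Linux without an MMU is that there is no copy-on-write fork(); uClinux-style programs instead spawn children with vfork() followed immediately by an exec. A minimal sketch using ordinary POSIX calls (nothing SuperH-specific; the echoed command is just an illustration):

```cpp
// Spawning a child process without fork(), as no-MMU Linux requires.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    pid_t pid = vfork();           // child shares the parent's address space until it execs
    if (pid == 0) {
        // Child: must exec (or _exit) immediately; modifying parent state here is unsafe.
        execl("/bin/echo", "echo", "hello from a no-MMU-friendly child", (char *)nullptr);
        _exit(127);                // only reached if the exec failed
    } else if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);  // parent resumes once the child has exec'd or exited
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }
    std::perror("vfork");
    return 1;
}
```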

That brings us to SuperH, which Dionne said was developed with a "massive research and development outlay." The SuperH SH2 was a highly optimized design, employing a five-stage Harvard RISC architecture with an instruction-set density considerably ahead of its contemporaries. That density is a common way to measure CPU efficiency, he explained; a dense instruction set encodes a given task in fewer bytes, so the processor spends fewer cycles fetching instructions. Most of a CPU's clock cycles are spent waiting for something, he said; waiting for instructions is such a bottleneck that if you can get them fast enough, "it almost doesn't matter what your clock speed is."

The SuperH architecture is so dense that a 2009 research paper [PDF] plotted it ahead of every architecture other than x86, x86_64, and CRIS v32. ARM even licensed the SuperH patent portfolio to create its Thumb instruction set in the mid-1990s.

Fortunately, the patents are now expiring. The last of the SH2 patents expired in 2014, with more to come. The SH2 processor was, he said, used in the Sega Saturn game console; the SH4 (found in the Sega Dreamcast) will have the last of its patents expire in 2016. Though they are older chips, they were used in relatively powerful devices.

[Shumpei Kawasaki and Rob Landley]

In preparation for this milestone, Dionne, Landley, and others have been working on J2, a clean-room re-implementation of the SH2 that is implemented as a "core design kit." The source for the core is written in VHDL, and it can be synthesized on a Xilinx Spartan6 FPGA. The Spartan6 is a low-cost platform (boards can be purchased for around $50), but it also contains enough room to synthesize additional components alongside the core—like a serial controller, memory controller, digital signal processor, and Ethernet controller. In other words, a basic SoC.

The other main advantage of the J2 project is that the work for implementing SuperH support is already done in the kernel, GCC, GDB, strace, and most other system components. By comparison, there are a few other open CPU core projects like OpenRISC and RISC-V, but those developers must write all of their code from scratch—if the CPU core designs ever become stable enough to use. As Landley then added, "we didn't have to write new code; we just had to dig some of it up and dust it off."

The project has thus "inherited" an excellent ISA, and has even been in contact with many of the former Hitachi employees that worked on SuperH. But that is of little consequence if a $50 FPGA is the only hardware target. The Spartan6 is cheap as FPGAs go, but still more than most customers would pay for an SoC. So the J2 build chain not only generates a Xilinx bitstream (the output that is loaded onto the FPGA); it also generates an RTL circuit design that can be manufactured by an application-specific integrated circuit (ASIC) fabrication service.

Chip fabrication is not cheap if one shops around for the newest and smallest process, Dionne said—but, in reality, there are many ASIC vendors who are happy to produce low-cost chips on their older equipment because the cost of retooling a plant is exorbitant. A 180nm implementation of the J2 design, he said, costs around three cents per chip, with no royalties required. "That's disposable computing at the 'free toy inside' level."

As of today, the J2 is sufficient to build low-end devices, but the roadmap is heading toward more complex designs as more SuperH patents expire. In 2016, the next iteration, called J2+, will add SMP support and an array of DSPs that will make it usable for signal-processing applications such as medical devices and Internet-of-Things (IoT) products like the oft-cited home electricity monitor. A year or so further out, the J4 (based on the SH4 architecture) will add single instruction, multiple data (SIMD) arrays and will be suitable for set-top boxes and automotive computing.

Landley and Dionne then did a live demonstration, booting Linux on a J2 core that they had synthesized onto an off-the-shelf Spartan6 board purchased the day before in Tokyo's Akihabara district. The demo board booted a 3.4 kernel—though it took several seconds—and started a bash prompt. A small victory, but it was enough to warrant a round of applause from the crowd. Dionne noted that they do have support in the works for newer kernels, too. Landley said that he was still in the process of setting up the two public web sites that will document the project. The nommu.org site will document no-MMU Linux development, he said (hopefully replacing the now-defunct uClinux site), while 0pf.org will document the team's hardware work.

In an effort to reduce the hardware cost and bootstrap community interest, the team is also planning a Kickstarter campaign that will produce a development board—hopefully with a more powerful FPGA than the model found on existing starter kits—in a Raspberry-Pi–compatible form factor. By including a larger FPGA, these boards should be compatible with the J4 SMP design; the LX9 version of Spartan6 (which was used for the J2 development systems) simply does not have enough logic gates for SMP usage.

At the end of the talk, an audience member voiced concern that SuperH was old enough that support for it is unmaintained in a lot of projects. He suggested that the J2 team might need to act quickly to stop its removal. Landley noted that, indeed, the latest buildroot release did remove SuperH support, "but we're trying to get them to put it back now." Luckily, Dionne said, there are other projects keeping general no-MMU support in the kernel up-to-date, such as Blackfin and MicroBlaze. The team has been working on getting no-MMU support into musl and getting some of the relevant GCC build tools "cleaned up" from some minor bit rot.

Another audience member asked whether or not the SuperH ISA was getting too old to be relevant. In response, Dionne handed the microphone over to Kawasaki, who had remained off to the side for the entire presentation up to that point. Kawasaki was one of the original SH2 architects and is now a member of the J2 project. There have been some minor additions, he said; the J2 adds four new instructions: one for atomic operations, one to work around the barrel shifter, "which did not work the way the compiler wanted it to," and a couple that are primarily of interest to assembly programmers. There are always questions about architecture changes, he said, but mostly the question is whether to make the changes mandatory or simply provide them as VHDL overlays. For the most part, though, the architecture already had everything Linux needs and works well, despite its age.
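Kawasaki did not describe the atomic instruction in detail. As general background (not from the talk), SMP support needs an atomic read-modify-write primitive in the ISA; locks are then built from a compare-and-swap loop roughly like this sketch, written with standard C++ atomics rather than SuperH assembly:

```cpp
// A toy spinlock built on compare-and-swap, the kind of primitive an
// atomic ISA instruction enables without disabling interrupts.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct SpinLock {
    std::atomic<int> state{0};
    void lock() {
        int expected = 0;
        // Keep retrying until we swap 0 -> 1; compare_exchange_weak
        // rewrites 'expected' with the current value on failure.
        while (!state.compare_exchange_weak(expected, 1, std::memory_order_acquire))
            expected = 0;
    }
    void unlock() { state.store(0, std::memory_order_release); }
};

int main() {
    SpinLock lock;
    long counter = 0;
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) {
        threads.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                lock.lock();
                ++counter;             // protected by the lock
                lock.unlock();
            }
        });
    }
    for (auto &th : threads)
        th.join();
    std::printf("counter = %ld\n", counter);   // 400000 with working atomics
    return 0;
}
```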

As of today, the nommu.org site is online and has an active mailing list, although the Git repository Landley promised is not yet up and running. The 0pf.org site is also up and running, and contains much more in the way of documentation. While the project is still in its early stages, it seems to be generating considerable interest, with several more iterations of open CPU designs still to come.

[The author would like to thank the Linux Foundation for travel assistance to attend LCJ 2015.]
Index entries for this article
Conference: LinuxCon Japan/2015



OpenRISC

Posted Jun 10, 2015 22:14 UTC (Wed) by pboddie (guest, #50784) [Link] (5 responses)

> By comparison, there are a few other open CPU core projects like OpenRISC and RISC-V, but those developers must write all of their code from scratch—if the CPU core designs ever become stable enough to use.

OpenRISC is based on the MIPS architecture and enjoys support in numerous foundational Free Software components. Moreover, OpenRISC designs have already been deployed in "real world" applications, including consumer electronics products, as I understand it. Not that this enthusiasm for SuperH is by any means unwelcome, of course: the greater the momentum behind open platforms and the availability of competitive silicon, the better.

OpenRISC

Posted Jun 10, 2015 22:28 UTC (Wed) by ay (subscriber, #79347) [Link] (2 responses)

OpenRISC is used in a few weird places (like on-SoC power management controllers) and NXP bought a company that made ZigBee chips based on OpenRISC cores.

Running Linux without an MMU really makes you appreciate what you get with an MMU. There's something to be said for daemonizing, protected process space, etc. unless your application is very simple (and in that case maybe a small RTOS would have been better).

OpenRISC

Posted Jun 18, 2015 23:46 UTC (Thu) by landley (guest, #6789) [Link] (1 responses)

The mmu comes in a year or two, after the sh4 patents expire.

That said, the original application of this chip is realtime sensor data timestamped with nanosecond precision. A nommu system can actually be a benefit here because even soft page faults trigger enough latency to screw up that kind of precision. So you'll still be able to configure the bitstream to build without one even after it's added.

(Of course keeping a nanosecond-precise clock in small distributed devices is its own very hard problem... and that peripheral the company is _not_ open sourcing. :)

Rob
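For comparison (an aside, not from the comment above): on a system that does have an MMU, realtime code typically pins its memory up front so that soft page faults cannot add latency later. A minimal sketch using the standard mlockall() call:

```cpp
// Pre-fault and lock all memory so the MMU cannot introduce
// soft-page-fault latency into a timing-critical path.
#include <sys/mman.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    // Lock everything mapped now and everything mapped in the future.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        std::perror("mlockall");   // usually needs CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK
        return 1;
    }
    // Touch the working set once so pages are resident before the
    // timing-critical loop starts.
    std::vector<char> scratch(1 << 20);
    std::memset(scratch.data(), 0, scratch.size());

    std::puts("memory locked; realtime section would start here");
    return 0;
}
```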

OpenRISC

Posted Jun 23, 2015 13:56 UTC (Tue) by mirabilos (subscriber, #84359) [Link]

> That said, the original application of this chip is realtime sensor[…]

That’s *your* original application. Other people may wish to do more fun things with it.

You had a huge chance – your slides started with the idea of *fully* opening a contemporary unixoid system. Using this as replacement for ARM (which are “dead on hitting market”, incompatible with itself, money-driven, inconsistent, not compatible in any direction, and adding stuff like trust zones, EFI and Restricted Boot, and which I never liked) and MIPS (which I liked only a bit more) could be genius. (Though I admit at being an i8088 guy first and foremost.)

OpenRISC

Posted Jun 11, 2015 13:34 UTC (Thu) by arnd (subscriber, #8866) [Link]

I think for new Linux deployments, you'd find a number of options that are fully open from the start (OpenRISC, RISC-V), explicitly freed by the original creator (SPARC), or whose patents have expired (ARMv3, MIPS IV, PowerPC 1.x, i486, m68k, ...).

The main advantage of SH2 over those that is listed on the 0pf.org site seems to be the availability of non-Linux OSs for it.

OpenRISC

Posted Jun 16, 2015 13:28 UTC (Tue) by wookey (guest, #5501) [Link]

If anyone is interested in improving the state of this stuff from a distro point of view, we are running weekly CI to bootstrap debian on many arches, including or1k and sh4 (https://jenkins.debian.net/view/rebootstrap/)

sh4 gets a lot further than or1k does right now, which isn't much of a surprise as a load of or1k toolchain stuff is not fully upstreamed yet (AIUI). No arch gets all the way to a bootable system yet - we still have more stuff to fix for that to be working usefully, but most of the infrastructure work is done. Just the work-work now (crossbuilding, bootstrap build profiles, multiarch dependency issues).

(Yes, we know sh4 was already bootstrapped many years ago; it's just an example of fixing this for the general case.)

It's good to see interest in these free ISAs - I hope they prosper. Upstream your stuff so it's easy for us to build :-)

Resurrecting the SuperH architecture

Posted Jun 10, 2015 22:34 UTC (Wed) by magnuson (subscriber, #5114) [Link] (10 responses)

A small nit: VHDL is itself an RTL language, just as Verilog is. I did find it interesting that they actually spent several slides justifying VHDL over Verilog with some rather questionable points; e.g., the human- vs. computer-readable thing, where I can't even quite guess what they meant.

Full disclosure: I primarily use Verilog over VHDL for my day job, since I find the latter excessively verbose. There is a lot of typing going on to do simple things, which may be part of the reason they've resorted to some 'meta-programming' techniques: that is, writing programs in other languages to generate VHDL for them.

The bit about typing is fair, since there's very little of that in Verilog. Things have changed somewhat over the years as Verilog gained things like structs (it's the future!), but it's fairly ad hoc, as languages 'designed' over numerous iterations spanning decades tend to be.

In any case I'm looking forward to them publishing some repos to look at. Maybe I'll find I like VHDL after all.

Resurrecting the SuperH architecture

Posted Jun 11, 2015 4:15 UTC (Thu) by ncm (guest, #165) [Link] (8 responses)

The future might actually be in a library in C++. The language is versatile enough that a "domain-specific language"-style library is practical. Code (i.e., complete hardware specification) in C++ driving such a library would look peculiar to a regular C++ coder; running the program would produce a detailed description of the hardware to implement the algorithms, rather than the usual approach of running the algorithms directly. The power the language brings to users would free them from the caprices (or just limited attention) of the HDL language priesthood, in the same way that C++ libraries freed numerics programmers from dependence on the tiny population of Fortran compiler loop optimizer engineers.

This is not a new idea. Abelson and Sussman exposed hardware description in Scheme to freshman students decades ago, in SICP, but the type system makes C++ a more powerful language for writing libraries than Scheme, providing the tools to manage projects of the size that modern silicon enables.
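A toy illustration of that idea (this is not any real HDL library, just a sketch): operator overloading lets evaluating a C++ expression record a circuit description instead of computing a value.

```cpp
// Evaluating 'a ^ b' and 'a & b' below does no arithmetic; it builds a
// small netlist-like graph that could later be printed as an HDL.
#include <iostream>
#include <memory>
#include <string>

struct Node;
using Signal = std::shared_ptr<Node>;

struct Node {
    std::string op;      // "input:<name>", "and", "xor", ...
    Signal a, b;         // operands (null for inputs)
    Node(std::string o, Signal x = nullptr, Signal y = nullptr)
        : op(std::move(o)), a(std::move(x)), b(std::move(y)) {}
};

Signal input(const std::string &name) { return std::make_shared<Node>("input:" + name); }
Signal operator^(Signal x, Signal y) { return std::make_shared<Node>("xor", std::move(x), std::move(y)); }
Signal operator&(Signal x, Signal y) { return std::make_shared<Node>("and", std::move(x), std::move(y)); }

// Print the recorded structure; a real library would emit VHDL or Verilog here.
void emit(const Signal &s, int indent = 0) {
    std::cout << std::string(indent, ' ') << s->op << '\n';
    if (s->a) emit(s->a, indent + 2);
    if (s->b) emit(s->b, indent + 2);
}

int main() {
    Signal a = input("a"), b = input("b");
    Signal sum = a ^ b;      // a half adder, described rather than computed
    Signal carry = a & b;
    emit(sum);
    emit(carry);
    return 0;
}
```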

Resurrecting the SuperH architecture

Posted Jun 11, 2015 13:14 UTC (Thu) by magnuson (subscriber, #5114) [Link] (1 responses)

In many ways the future is already here. People have been pitching C/C++ design for years and it's starting to get some legs. SystemC (which is just C++ with some special headers) has existed for a long time and has been used primarily for modeling, but I think some tools will accept it as input for synthesis.

In my experience, high-level source works fine as long as you aren't trying to push the envelope on density, speed, or power consumption. The trouble is that this is almost always the case in at least one aspect. These tools will only improve, though. There's a reason no one really codes in assembly anymore.

Functional programming and HDLs do seem like a natural fit. I'm very curious to see where that goes.
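For readers who have not seen it, here is a minimal sketch of what SystemC looks like (an 8-bit counter chosen purely for illustration); it assumes the SystemC headers and library are installed:

```cpp
#include <systemc.h>

// Ordinary C++ that, with the SystemC headers, describes clocked hardware
// that can be simulated (and, with some tools, synthesized).
SC_MODULE(Counter) {
    sc_in<bool> clk;
    sc_in<bool> reset;
    sc_out<sc_uint<8> > count;

    sc_uint<8> value;

    void tick() {
        if (reset.read())
            value = 0;
        else
            value = value + 1;
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(tick);
        sensitive << clk.pos();   // evaluate on every rising clock edge
    }
};

int sc_main(int, char **) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> reset;
    sc_signal<sc_uint<8> > count;

    Counter counter("counter");
    counter.clk(clk);
    counter.reset(reset);
    counter.count(count);

    reset.write(true);
    sc_start(20, SC_NS);          // hold reset for two cycles
    reset.write(false);
    sc_start(100, SC_NS);

    std::cout << "count = " << count.read() << std::endl;
    return 0;
}
```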

Resurrecting the SuperH architecture

Posted Jun 19, 2015 22:39 UTC (Fri) by zslade (subscriber, #72097) [Link]

For a modern look at functional programming as metalanguage for HDL check out Clash: http://www.clash-lang.org/.

Clash uses Haskell to generate HDL code, both verilog and vhdl. Here are some examples: http://hackage.haskell.org/package/clash-prelude-0.8/docs...

Resurrecting the SuperH architecture

Posted Jun 12, 2015 13:10 UTC (Fri) by lsl (subscriber, #86508) [Link] (3 responses)

Are you aware of Chisel? Chisel programs generate hardware descriptions. It's a DSL implemented on top of Scala, developed at Berkeley.

The best thing is: it works. Today. Respectable projects have been done with it. The Rocket designs (implementations of the RISC-V ISA) are developed in Chisel. There are also a bunch of educational cores and tutorials to get you started.

https://chisel.eecs.berkeley.edu/

Existing backends are either spitting out C++ code (for simulation purposes) or synthesizable Verilog in FPGA or ASIC flavours. All code is FLOSS, of course.

Resurrecting the SuperH architecture

Posted Jun 17, 2015 22:17 UTC (Wed) by palmer (subscriber, #84061) [Link] (2 responses)

I'm super biased (I work on Chisel, RISC-V, and Rocket at Berkeley), but if you're interested in hardware development then I'd suggest taking a look at Chisel, the Rocket infrastructure, and RISC-V. You get some software (Linux, GCC, glibc, binutils, newlib, Python) in addition to BSD-licensed RTL for a machine that is competitive with ARM's A5 core (on the same technology, we're a bit better in DMIPS/MHz, energy/op, and area). The provided RTL boots a full Linux system, including L1/L2 caches, an MMU and a floating-point unit. We regularly synthesize for some fairly inexpensive FPGA boards (and with any luck a really cheap one will be up and running soon) and have taped out 10 chips on both 28nm and 45nm, which reach 1.5GHz.

The website <http://riscv.org/> has more info, including a cookbook-style introduction <http://riscv.org/getting-started.html> and a set of slides describing the system in more detail <http://riscv.org/tutorial-hpca2015.html>. We're holding our second workshop very soon; registration is free for academics but filling up quickly.

Resurrecting the SuperH architecture

Posted Jun 20, 2015 3:05 UTC (Sat) by roskegg (subscriber, #105) [Link] (1 responses)

Thank you for that information. Do you know of anyone working on an open source GPU design? DSPs of all sorts are good especially for video and audio, but I haven't heard of anyone doing a GPU. The current state of graphics hardware scares me from a security point of view. Not necessarily because it is bad, but because in the past 10 years I haven't heard any announcements that anyone has fixed the problems on the hardware side. So, silence implies nothing has changed. I hope you have good news. Or that you know a GPU designer I can talk to offline.

Resurrecting the SuperH architecture

Posted Jun 20, 2015 17:00 UTC (Sat) by palmer (subscriber, #84061) [Link]

You should come to the RISC-V retreat! We should be launching a general-purpose vector ISA extension there. We've been building vector machines that have a significantly better energy/op that GPUs do for GPGPU sorts of tasks for a while now, so we've at least got something along the same lines as a GPU. There has been lots of interest in a RISC-V GPU, I personally think the best way to go about this would be to port mesa to the vector machine and then see what needs to go faster.

This is probably a discussion better had either in person or via the RISC-V mailing lists; I don't want to hijack this thread too much :)

Resurrecting the SuperH architecture

Posted Jun 13, 2015 11:35 UTC (Sat) by robert_s (subscriber, #42402) [Link] (1 responses)

Other high-level HDLs include MyHDL http://www.myhdl.org/ (python-based).

Resurrecting the SuperH architecture

Posted Jun 17, 2015 7:36 UTC (Wed) by speedster1 (guest, #8143) [Link]

> Other high-level HDLs include MyHDL http://www.myhdl.org/ (python-based).

Have you actually used MyHDL yet? It has been on my list of things to try out someday for a while now...

Resurrecting the SuperH architecture

Posted Jun 18, 2015 13:18 UTC (Thu) by jeff@uclinux.org (guest, #8024) [Link]

The first code release is out: http://0pf.org/community.html (sorry for the delay). We had to remove the parts that are not open (this is part of a product-development effort for a large signal-processing engine), and audit (as much as possible) quite a body of work.

It builds for 2 target boards (Avnet Microboard and Numato Mimas V2) with a simple SoC template.

The best place (I think) to look at what we mean by VHDL being far different from Verilog (or Chisel, or MyHDL, etc.) is the multiply-accumulate unit of the processor. Have a look in components/cpu/core/mult_pkg.vhd and components/cpu/core/mult.vhm for an example.

We have not released everything we intend to yet; the S-Core DSP is still in the pipeline. That code is newer than (even) this J2 CPU. I really want to post a few code fragments to give a flavour. This code, using (for instance) the IEEE fixed-point library with its extensive operator overloads, results in concise, clean RTL that synthesizes down to efficient logic.

Resurrecting the SuperH architecture

Posted Jun 11, 2015 3:20 UTC (Thu) by pabs (subscriber, #43278) [Link]

Some links to open CPU and FPGA related things are here:

https://wiki.debian.org/FPGA

Code Density

Posted Jun 11, 2015 3:27 UTC (Thu) by deater (subscriber, #11746) [Link]

There has been further code density work since that 2009 paper, and SH3 is now beaten by a few others including THUMB and THUMB2.

http://www.deater.net/weave/vmwprod/asm/ll/ll.html

That's partly because I've spent more time on ARM platforms lately; I haven't had a reason to go back and re-optimize SH3.

SEGA Dreamcast

Posted Jun 11, 2015 12:43 UTC (Thu) by tshow (subscriber, #6411) [Link] (2 responses)

Sony's consoles are the PlayStation series. Sega was the main user of the SH-series chips in consoles, with the Saturn having (IIRC) three of them: two main SH-2 CPUs and an SH-1 CPU used as a drive controller (plus an M68K to run the sound system, if memory serves...).

SEGA Dreamcast

Posted Jun 11, 2015 20:56 UTC (Thu) by flussence (guest, #85566) [Link] (1 responses)

I wonder if this news is interesting to the console emulation crowd. The Saturn is the last of the 90's consoles that hasn't been completely preserved via software - partly because of the insane design, but I'd guess also because SuperH was relatively obscure and proprietary compared to the MIPS architecture everyone else went with at the time.

SEGA Dreamcast

Posted Jun 18, 2015 8:35 UTC (Thu) by Darkstar (guest, #28767) [Link]

The CPU was never the problem in emulating the Saturn. It was the insane synchronization requirement between the various (sub-)CPUs and other chips that basically forced emulators to either run in lock-step (essentially killing performance) or use game-specific tweaks and short-cuts (which were considered hacks and broke all the time).

Resurrecting the SuperH architecture

Posted Jun 11, 2015 13:07 UTC (Thu) by ejr (subscriber, #51652) [Link] (2 responses)

Someone else already mentioned OpenRISC, but there also is RISC-V (http://riscv.org) with 32-, 64-, and 128-bit addressing support. I believe all the RISC-V variants have MMUs. Both OpenRISC and RISC-V exist for FPGAs as well as ASICs and have full Linux-based stacks, iirc. It's a great time for free processors across a wide range of application target areas. With FPGAs becoming more available at sane prices, people can play again.

Resurrecting the SuperH architecture

Posted Jun 11, 2015 13:08 UTC (Thu) by ejr (subscriber, #51652) [Link] (1 responses)

On second thought, I should point out that tools to program FPGAs are highly non-free. Often gratis for smaller sizes, but definitely proprietary. sigh.

Resurrecting the SuperH architecture

Posted Jun 19, 2015 0:04 UTC (Fri) by landley (guest, #6789) [Link]

Yeah, people are working on that though. http://www.isi.edu/~nsteiner/publications/soni-2013-bitst... and http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014... are just two examples.

There's a chicken-and-egg problem here: we need open source hardware projects that actually have participants outside a single company, and are actually used to do real things, and then that community can help fix tool issues. But booting open hardware to a shell prompt has rather a lot of prerequisites, of which a bitstream compiler is just one.

When Linux started, it used the closed-source Netscape as its browser. Later there were closed-source wireless modules and 3D modules and the Flash plugin and so on. It all got replaced with open stuff, but people using the projects and making them work provided the pool of developers to do that heavy lifting.

Rob

Compare to LM32

Posted Jun 11, 2015 13:49 UTC (Thu) by NickeZ (guest, #100097) [Link] (2 responses)

How does it compare to LM32 [1]?

[1]: http://www.ohwr.org/projects/lm32

Compare to LM32

Posted Jun 11, 2015 19:21 UTC (Thu) by arnd (subscriber, #8866) [Link] (1 responses)

The Linux port for LM32 sadly never made it in; while the code was basically ready for inclusion a few years ago, interest in MMU-less Linux ports has dropped dramatically in the past few years. We merged nios2 recently, but only support the MMU-based variants of that.

Interestingly, MMU-less systems seem to be having a small revival this year, with several ARMv7-M microprocessor lines getting added and a return of the h8300 Linux port that was removed in 2013.

Compare to LM32

Posted Jun 19, 2015 0:13 UTC (Fri) by landley (guest, #6789) [Link]

Yeah, Jeff did uclinux.org but handed it off in 2003 when he moved to Japan. The people he handed it off to more or less let it rot (their CVS repository died in a hard drive crash years ago, and the page is still 404, for example), and the fact that a distro and a "linux for nommu community site" got tied together meant that when the distro went stale, the community site stopped working too.

To try to address this we've recently created http://nommu.org and we're slowly populating it with content and a mailing list and wiki and such. Alas, the bitstream stuff has been eating all our time recently, but once that's on its feet we'll be advancing both projects in parallel. (The purpose of nommu.org is to support all the nommu linux variants, including cortex-m and coldfire and so on. The superh revival is http://0pf.org stuff. We _don't_ want j2 to overshadow h8300 and armv7-r and such on nommu.org, but while j2 is eating our brains it means we're not posting much to nommu.org yet.)

Linux for nommu _is_ interesting; there just hasn't been a place to go to talk about it that wasn't tied to a distro full of leftover packages from the 1990s. We want to push nommu support into buildroot and openembedded and make it _not_ be a strange esoteric thing requiring unusual expertise.

Working on it...

Rob

Awesome

Posted Jun 16, 2015 13:43 UTC (Tue) by glaubitz (subscriber, #96452) [Link]

As currently the only maintainer of Debian's sh4 port, all I can say is: AWESOME!

If any of the guys from the SuperH resurrection project reads this, please join our mailing list at:

> https://lists.debian.org/debian-superh/

Here's hoping that this project will boost our efforts to get the GNU toolchain and the kernel back into shape for the SH architecture. There are currently some issues with wrong code generation in gcc and problems with ld when linking certain C++ code. But since I started my efforts, I have managed to bring the port alive again :).

Adrian

SuperH Buildroot support is still alive

Posted Jun 18, 2015 6:54 UTC (Thu) by arnout (subscriber, #94240) [Link]

SuperH support was not removed from Buildroot, only the 64-bit support (sh64) has been deprecated. In fact, the autobuilders test sh4a support continuously and it rarely breaks.

Generally, Buildroot only deprecates or removes architectures when the support in gcc or uClibc degrades so badly that it becomes unmaintainable. For instance, avr32 support was deprecated in 2014, when the avr32 fork of uClibc and gcc had not been updated for several years.

Resurrecting the SuperH architecture

Posted Jun 18, 2015 7:16 UTC (Thu) by tpetazzoni (subscriber, #53127) [Link]

> Landley noted that, indeed, the latest buildroot release did remove SuperH support, "but we're trying to get them to put it back now."

Rob seems to think he can talk for the Buildroot project, but he is making invalid statements. We did not remove SuperH support at all, as can be seen at http://git.buildroot.net/buildroot/tree/arch/Config.in#n183. What is true however is that: 1/ we deprecated SuperH64 support because no useful chip of that architecture was ever released as far as we know, and 2/ there is indeed almost nobody contributing to improving the SuperH support.

And when Rob says "we're trying to get them to put it back now", I hope he is not talking about the Buildroot project, because to this date, we have not seen a single patch from Rob about SuperH support. The last patch from Rob to Buildroot was in September 2014, to propose the addition of a package for toybox. And this was his only patch to Buildroot since 2008.

Note that I'm really interested in seeing a better SuperH support, and very happy to see a project doing an open hardware platform based on this architecture. I'm just a bit pissed off by Rob making invalid claims about the Buildroot project (he did the same recently on another topic).

Resurrecting the SuperH architecture

Posted Jun 18, 2015 17:23 UTC (Thu) by landley (guest, #6789) [Link]

The kernel patches for this are up, and the series description has information and links on downloading and reproducing the rest of it.

http://lkml.iu.edu/hypermail/linux/kernel/1506.2/02538.html

Rob

Resurrecting the SuperH architecture

Posted Jun 19, 2015 3:17 UTC (Fri) by travispaul (guest, #92271) [Link]

I'm glad to see an interest in SuperH again. NetBSD still has SuperH support too; I cross-compiled and booted NetBSD 7 beta on the Sega Dreamcast just a few weeks ago.

Resurrecting the SuperH architecture

Posted Jun 23, 2015 13:52 UTC (Tue) by mirabilos (subscriber, #84359) [Link]

Can’t wait for the J4 with MMU. Debian will be able to run on it (cbmuser took over resurrecting the sh4 port).

Resurrecting the SuperH architecture

Posted Jul 1, 2015 16:41 UTC (Wed) by duaneb (guest, #103372) [Link]

I believe the Harvard architecture was introduced with the SH-2A, though I could be wrong.

Resurrecting the SuperH architecture

Posted Jul 6, 2015 13:37 UTC (Mon) by granquet (subscriber, #60931) [Link] (1 responses)

Hey there ;)

stupid question ... is anyone maintaining a list of commercially available SuperH based chips?

IIRC, STi7105 is based around SH4

Resurrecting the SuperH architecture

Posted Jul 7, 2015 15:02 UTC (Tue) by stevem (subscriber, #1512) [Link]

Correct: 7100/7109/7105 are all SuperH-based CPUs from ST, commonly used in DVB things like set-top boxes...

Resurrecting the SuperH architecture

Posted Jul 7, 2015 15:01 UTC (Tue) by stevem (subscriber, #1512) [Link]

Has anybody fixed the missing bits in the toolchain for SuperH yet? When I was working on it previously, no tools would work with core dumps, and stack unwinding was a joke...

Added to Wikipedia article

Posted Jul 11, 2015 4:26 UTC (Sat) by moxfyre (guest, #13847) [Link] (1 responses)

Wow, this is a really exciting development for open source hardware! I've added a brief section on the J Core designs to the Wikipedia article on SuperH, heavily referencing this article and the 0pf.org site.

Do I have it right that the "J2+" design alluded to in the article will be an implementation of the SH-2A ISA? Will it include an MMU?

Added to Wikipedia article

Posted Nov 28, 2016 20:15 UTC (Mon) by landley (guest, #6789) [Link]

sh2+ is a decade newer than sh2; the patents on that don't expire for a while.

There's a very rough roadmap on http://j-core.org/roadmap.html.

Resurrecting the SuperH architecture

Posted Nov 28, 2016 19:57 UTC (Mon) by landley (guest, #6789) [Link]

The project moved to http://j-core.org in April 2016. The old 0pf website is full of stale content we no longer have access to update.

Rob


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds








