
Congratulations to Hammond Pearce at New York University for winning the first-place prize in the Efabless AI Generated Design Contest!

Welcome to the Q&A with Dr. Hammond Pearce, the first-place winner of the Efabless AI Generated Design Contest. Hammond will be discussing his work on the QTCore-C1, the challenges he faced, and what he learned from the experience.

What is the nature of your design?

Hammond: The QTCore-C1 design that we entered is an 8-bit accumulator-based architecture which can act as a kind of predictable co-processor for the main Caravel core. It can do basic mathematical and logic operations, interact with several input/output lines as well as measure time with an internal counter, and can send and receive values and interrupt requests to the main processor. We implemented this design entirely via conversations with OpenAI’s GPT-4; every component and every signal was authored by GPT-4. We even had GPT-4 patch bugs after they were found during testing, as well as provide insights into the design of the ISA itself.

What did you learn from this experience?

Hammond: The main thing that we learned from our experience in this competition is that AI combined with the Efabless process is a tremendous enabler for producing custom silicon. I never really even considered that a small team like ours could have the capability to produce a whole co-processor and have it turned into a real chip. During this process, we found that the AI tools like GPT-4 are surprisingly capable at interpreting and iterating over design ideas to actually turn them into functional hardware description language. I think that this work demonstrates that we’re on the cusp of something special, where we will soon be able to pair novice engineers with commercial and open-source AI tools and build processes to produce real-world hardware designs. In short, I think that what we showed here is just the start of what we’ll soon be able to do.

Can you talk more about your project that you submitted to the design contest?

Hammond: The QTCore-C1 is a comprehensive 8-bit microarchitecture designed and implemented entirely via GPT-4 conversations. The processor is an accumulator-based Von Neumann design (shared data and instruction memory).
It has 256 bytes of instruction and data memory, as well as 8-bit I/O ports, an internal 16-bit timer, and memory execution protection across 16-byte segments.
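To make the shared instruction/data memory concrete, here is a minimal Python sketch of an accumulator-based Von Neumann machine. The opcodes and two-byte encoding are purely illustrative assumptions; the real QTCore-C1 ISA was co-designed with GPT-4 and differs from this toy example.

```python
def run(program, max_steps=1000):
    """Execute a toy accumulator machine with 256 bytes of shared memory."""
    mem = bytearray(256)
    mem[:len(program)] = program            # shared instruction/data memory
    acc, pc = 0, 0
    for _ in range(max_steps):
        op, arg = mem[pc], mem[(pc + 1) % 256]   # 2-byte instructions
        pc = (pc + 2) % 256
        if op == 0x00:   acc = mem[arg]                 # LOAD  addr
        elif op == 0x01: mem[arg] = acc                 # STORE addr
        elif op == 0x02: acc = (acc + mem[arg]) & 0xFF  # ADD   addr
        elif op == 0xFF: return acc, mem                # HALT
    raise RuntimeError("no HALT reached")

# Program: LOAD mem[8]; ADD mem[10]; STORE mem[12]; HALT.
# Code occupies addresses 0-7, data sits at 8 and 10 in the same memory.
acc, mem = run(bytes([0x00, 8, 0x02, 10, 0x01, 12, 0xFF, 0, 5, 0, 7]))
```

Because code and data live in the same 256-byte array, a stray store can overwrite instructions, which is exactly why the real design adds execution protection over 16-byte segments.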

[Image: QTCore-C1 datapath diagram]

What was your goal and why did you choose this design?

Hammond: My goal was to implement a microcontroller-like design as a co-processor for Caravel without writing a single line of Verilog. I wanted to include some limited peripherals (I/O and a timer) and the ability to signal the main Caravel processor (with data and interrupt requests). There were a lot of potential directions within this space, from choosing the co-processor’s instruction set architecture (ISA) to setting various design constraints.

Ultimately, I decided to have GPT-4 co-design the ISA as well as the processor, and so for simplicity stuck with an accumulator-based architecture. I explicitly did not want to use a standardized ISA like MIPS or RISC-V, as there are plenty of examples of processors implementing these online, meaning that GPT-4 might have been able to just reproduce designs it had seen before. Instead, as QTCore-C1 has an entirely novel ISA, GPT-4 had to demonstrate a capability for generalization and creativity.

Further, I was confident in GPT-4’s ability to write the code in this space, as I and the rest of the research team had previously performed a more limited benchmarking and processor design trial with the model (see our work at https://arxiv.org/abs/2305.13243).

How did you implement your project? What challenges did you run into and what did you learn?

Hammond: My team and I have had experience with using AI models for writing Verilog since GPT-2 was open-sourced in late 2019. The field is continuously evolving at a rapid pace, and so developers must continuously update their understanding of how to best use the latest AI models. The most significant recent paradigm shift came with the family of “instructional”/“conversational” models like ChatGPT. We believe that these models will be key to the transformation of the hardware development process.

As such, we began our experimentation using the most capable broadly-available model, GPT-4. We started by giving it some context of the case studies we had previously designed and began the process of making the new ISA for QTCore-C1. Then, in parallel with this conversation, we started multiple additional ‘threads’ on implementing the various internal components for the datapath, including registers, memory, multiplexers, and so on. Control signals then had to be routed to a control unit which had to be designed such that instructions would be decoded correctly.
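The control unit’s job of mapping each decoded instruction onto datapath control signals can be sketched as a decode table. The mnemonics and signal names below are hypothetical stand-ins; the actual QTCore-C1 control unit was written in Verilog through the GPT-4 conversations described above.

```python
# Hypothetical control word for each instruction class: which datapath
# signals the control unit asserts when that instruction is decoded.
CONTROL = {
    "LOAD":  dict(acc_write=1, mem_write=0, alu_op="pass"),
    "STORE": dict(acc_write=0, mem_write=1, alu_op="pass"),
    "ADD":   dict(acc_write=1, mem_write=0, alu_op="add"),
    "JMP":   dict(acc_write=0, mem_write=0, alu_op="pass", pc_load=1),
}

def decode(mnemonic):
    """Return the full control word, defaulting pc_load to 0 when unset."""
    signals = dict(pc_load=0)
    signals.update(CONTROL[mnemonic])
    return signals
```

Routing every signal through one table like this makes it easy to check, instruction by instruction, that decoding drives the datapath correctly.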

We had previously found that ChatGPT and GPT-4 struggle when writing test code for Verilog. Still, GPT-4 was helpful in providing a program assembler so that I could write assembly programs in the ISA and have them compiled into binaries for simulation. After loading these and simulating the design, a number of bugs were found. This is where steering GPT-4 can get quite tricky: having it fix a bug in a piece of code, especially in a large file, can be extremely difficult, so instructions sometimes need to be quite pointed (“change the variable type”, “change this constant”). At other times, though, GPT-4 can surprise you and identify a bug when simply told that something is going wrong.
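A program assembler of the kind mentioned above can be surprisingly small. The following two-pass sketch uses hypothetical mnemonics and a two-byte encoding of my own invention, not the actual assembler GPT-4 produced for the QTCore-C1 ISA: pass 1 records label addresses, pass 2 encodes each instruction to a binary.

```python
# Illustrative opcode table for a toy accumulator ISA (not the real one).
OPCODES = {"LOAD": 0x00, "STORE": 0x01, "ADD": 0x02, "HALT": 0xFF}

def assemble(lines):
    """Assemble toy-ISA source lines into a flat binary (2 bytes/insn)."""
    labels, insns, addr = {}, [], 0
    for line in lines:                      # pass 1: collect label addresses
        line = line.split(";")[0].strip()   # drop comments and blanks
        if not line:
            continue
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            insns.append(line)
            addr += 2
    out = bytearray()
    for insn in insns:                      # pass 2: encode opcode + operand
        parts = insn.split()
        arg = parts[1] if len(parts) > 1 else "0"
        value = labels[arg] if arg in labels else int(arg, 0)
        out += bytes([OPCODES[parts[0]], value])
    return bytes(out)
```

With a helper like this, assembly test programs can be edited as text and recompiled into binaries for each simulation run.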

Still, we observed that GPT-4 continues to be far more capable at writing functional code than it is at writing test code, and so human engineers (me) were needed to write the testbenches for the overall design.

If you had the opportunity to extend your project, what would you do next time?

Hammond: This inaugural challenge was designed to have quite a short turnaround, with only a few weeks between announcement and submission deadline, which meant that we focused on a smaller 8-bit architecture. In the future I would like to attempt a more ambitious design that uses more of the available die space. I am particularly interested in hardware accelerators and in exploring how we could best use GPT-4 in this space, for instance for matrix and floating-point operations. It would be particularly interesting if GPT-4 could implement hardware useful for executing GPT-type models!

“AI combined with the Efabless process is a tremendous enabler for producing custom silicon.”

Dr. Hammond Pearce

The team:

Hammond Pearce

Jason Blocklove

Ramesh Karri

Siddharth Garg