Bullaki Science Podcast

15. The 4th Wave of Electronic Development | Dr. Mark Rosker (DARPA)

January 21, 2022 Bullaki Season 1 Episode 15

As the director of the Microsystems Technology Office at the Defense Advanced Research Projects Agency (or DARPA), Dr Mark Rosker leads the office in the development of high-performance intelligent microsystems and next-generation components to ensure U.S. dominance in the areas of Command, Control, Communications, Computing, Intelligence, Surveillance, and Reconnaissance (C4ISR), Electronic Warfare (EW), and Directed Energy (DE). The effectiveness, survivability, and lethality of these systems depend critically on the microsystems contained inside. Mark was a program manager in the office from 2003 to 2011, where he developed a portfolio of technical programs in gallium nitride and other compound semiconductor radio-frequency devices, heterogeneous circuit integration, terahertz electronics, and quantum cascade lasers.

Here he talks about innovations led by DARPA's Microsystems Technology Office (MTO), the Electronics Resurgence Initiative and how DARPA is supporting the 4th wave of electronic development, advances beyond the von Neumann architecture, hardware security, new types of devices that can work at extreme temperatures, and electronics resilience to EMP attacks.

Quote: “I think you're bounded by your imagination. Of course the rules of physics have something to do with some of these things. But one of the fun things we were talking about being a Program Manager […] is to try and think bigger than you've ever had an opportunity to think before. That GPS […] sounded like science fiction at that time. And then the next generation beyond that, that said, I'm going to reduce that entire rack of equipment to something that I can put in my pocket. That also sounded like science fiction. So I don't want to tell you all my good ideas. They all sound like science fiction.”

******

CONNECT: 

- Subscribe to our YouTube channel: https://www.youtube.com/bullaki

- Support on Patreon: https://www.patreon.com/bullaki 

- Spotify: https://open.spotify.com/show/1U2Tnvo1PZY4Fu4QLHURJV 

- Apple Podcast: https://podcasts.apple.com/gb/podcast/bullaki-science-podcast/id1538487175

- LinkedIn: https://www.linkedin.com/in/samuele-lilliu/ 

- Website: www.bullaki.com 

- Minds: https://www.minds.com/bullaki/

- Vimeo: https://vimeo.com/bullaki 

- Odysee: https://odysee.com/@bullaki:0


*****

#bullaki #science #podcast #electronics #darpa #mto

Samuele Lilliu (SL). Hi, Mark, how are you?

Mark Rosker (MR). I'm doing fine. Can you hear me okay?

SL. Yeah, I can hear you very well. Thank you very much for doing this.

MR. Oh, my pleasure.

SL. So DARPA is the primary innovation engine of the DoD, the US Department of Defense. It supports high-risk, high-reward programs, with the aim of creating strategic surprise for adversaries and also preventing strategic surprise to the US. It seems to me that DARPA is after breakthrough discoveries rather than incremental science.

So how has DARPA’s Microsystems Technology Office contributed to ground-breaking discoveries in the electronics sector in the past 30 years? And, in your opinion, what have been the most important discoveries?

MR. So, really good question and very well phrased. As you said, our purpose is to try and look at the world differently, to understand what technology areas might lead to surprise, or to create surprises ourselves. It's often said that the best way to avoid technical surprise is to be ahead of the game: create your own surprises.

If you look at our history, there are a number of examples since MTO was founded just about 30 years ago. Some of the ones that come to mind are things like, for example, the MOSIS program [Metal Oxide Silicon Implementation Service]. The MOSIS program separated the design of electronics from fabrication and had a completely revolutionary impact. Today, large companies, anything from an auto company to a consumer electronics company, really anything that uses electronics, many of those kinds of companies design electronics, but they don't have the ability to fabricate their own electronics in-house. That was not true when DARPA started the MOSIS program; you literally had to have your own in-house capability. That separation of design from fabrication changed markets and led to a tremendous amount of innovation. So that's one example.

Another example: there have been a series of programs that have been about looking at the edge of the research space, trying to understand and accelerate innovation in this country, mostly in academia. The latest one happens to be called the JUMP program (Joint University Microelectronics Program).

But if you go far enough back in time there were several other predecessor programs; the Focus Center was, I believe, the first one. One of the efforts within the Focus Center was to develop a new form of transistor, which is now called the FinFET [fin field-effect transistor]. Most transistors that you're going to find in leading-edge electronics are built on this technology that DARPA helped develop; I think this was in about the 1996-1997 timeframe, so quite a few years ago.

An area that I particularly am involved in, that I think was very disruptive, was the wide bandgap semiconductor program, which developed gallium nitride (GaN). When I was a program manager in my first tour at DARPA, in the early 2000s, this was a program that I managed. It has had a tremendous impact, and I think it will have even more impact moving forward, in developing just a new class of electronics for RF applications, for power applications and, separately, for optoelectronic applications. So these are a few examples, but there have been many, many more.

SL. So what are the advantages of GaN compared to other standard semiconductors, say silicon?

MR. GaN is what's called a wide bandgap material. It has a bandgap, if I remember correctly, of something like 3.4 eV, which is about three times that of silicon. The reason why that matters is that when you are interested in power applications, be it power electronics or high power RF, for example a power amplifier or an RF switch, the bandgap turns out to be by far the most critical parameter. So it means that I can build a much, much better amplifier for RF electronics or a much better power converter. For example, if you're interested in an electric-powered automobile, the switching system for that automobile depends really critically on the electronics. The advantage that you have in high power applications with wide bandgap materials is tremendous: factors of 10 or even more, depending on what comparisons you're making. And it's rare that in electronics you find something that has an order of magnitude greater capability.
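
To put rough numbers behind those claims, here is a hedged back-of-envelope comparison in Python; the material parameters are textbook ballpark values rather than figures from the interview:

```python
# Rough Si vs GaN comparison for power devices, using textbook
# ballpark values; only GaN's 3.4 eV bandgap is quoted above.
si  = {"bandgap_eV": 1.12, "breakdown_MV_cm": 0.3}
gan = {"bandgap_eV": 3.40, "breakdown_MV_cm": 3.3}

print(gan["bandgap_eV"] / si["bandgap_eV"])    # ~3x wider bandgap

# The wider gap supports a roughly 10x higher critical (breakdown)
# field, and simplified power-device figures of merit scale about
# as the cube of that field, which is how "factors of 10 or even
# more" of headroom appear for amplifiers and converters.
print((gan["breakdown_MV_cm"] / si["breakdown_MV_cm"]) ** 3)  # ~1.3e3
```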

SL. Yeah, now I'm wondering whether gallium nitride semiconductors are used for solar cells. Are they used at all?

MR. GaN, not so much for solar cells, as a matter of expense. But the technology behind GaN is finding a lot of use in lasers, particularly in white or blue lasers. So, again, the bandgap of the material affects the frequency, the wavelength, at which the laser operates. In principle, these wide bandgap materials would be useful for solar cells, but they're still a little too expensive. And one also has to be cognizant that you're trying to match to the solar spectrum, which is not necessarily as far out… silicon is not bad for that, actually. So that's not its most used application… but for other things.

SL. Yeah. So is it correct to say that DARPA is something between a funding agency and a research facility without labs?

MR. So DARPA is a research funding organization; there is no research that's actually done in the building that we're in right now. So it's about funding research. But I think what makes DARPA unique compared to other funding organizations [is that], firstly, it's project-based. So that GaN program that I talked about a second ago existed for a period of time to achieve a certain goal, and when that goal was achieved the program was over. So that tends to focus a community on doing something specific, as opposed to a broad-front extension of knowledge, which is great, but often is not quite as directed as the project-based approach.

The second thing that's really different about DARPA compared to most any other place that I know is that DARPA recruits folks, program managers, to come to DARPA to do that kind of specific research, but only gives them a few years to do it. Typically, it's four years. So that causes all sorts of stress, if you will, in the institution; you have a very small amount of time to get a very big job done. And then you're complimented for the work you've done and walked out of the building, and someone else gets to sit in your chair.

SL. So what are the steps that are involved in the inception and operation of a new DARPA program? And how do program managers come up with new programs? What's the procedure?

MR. Yeah, that's our favorite question ever. Program managers ask themselves the same question in the morning when they wake up, right before they go to bed, and all throughout the day: How can I develop a new program?

So it starts with an idea, and that idea sometimes comes out of the head of the program manager; he or she has been in this area for a very long time and has a lot of experience. But just as often, if not more often, it comes out of help, a “phone a friend” kind of thing. People come and talk to us all the time and give us their advice, their ideas. So it's the work of an entire community. The program manager, in a sense, is representing that entire community.

The trick is, and this to me is really the challenge when I'm talking to people about coming to DARPA and being a program manager, I like to use the word "inversion". When you are an engineer or a scientist learning your craft in school, you're taught how to solve problems. And everyone who is employed in this area, at one level or another, is used to the idea of solving problems in constructive ways. What you're not taught is how to invert that and choose what the good problems to go after are. This is the secret of DARPA, because a good program manager is someone who can look at the set of issues confronting us today and encapsulate something that they want to do as a problem that they want to go solve. So picking the right problem is the hard part.

Now, it's very difficult to talk about the question you asked, about how we proceed, without mentioning the name of a former DARPA director, George Heilmeier, who developed a set of questions that one should ask themselves, and they're very simple questions.

They're things like: What are you trying to do? That seems to be an obvious thing that someone who's spending money should ask themselves: What is the problem you're trying to solve? What are you trying to do?

But surprisingly, answering that question can be really hard because, in a sense, the answer to that question is a boundary that starts to define what success is and starts to describe what things you're not attempting to do. And those things that you're not attempting to do might be very valuable; they're just not necessarily going to be within the scope of the time and the money that you have available. So these questions, Heilmeier's questions they're called, end up being kind of the trade by which program managers do their job.

SL. I wanted to talk a little bit about a new program that you have at DARPA, the Electronics Resurgence Initiative, which also involves companies, I guess. The Electronics Resurgence Initiative was established in 2017 with the recognition that continued US leadership in microelectronics was threatened in both the defense sector and the commercial industry. In fact, in 1990 the US produced 37% of the world's chip supply, but now the country is responsible for just 12% of global chip production. Over the past decades major Western corporations trying to maximise profits have outsourced electronics manufacturing to the East. From a DoD perspective, how has that negatively impacted the DoD? And what is DARPA doing to solve these problems?

MR. Well, the problem that you're talking about is actually one of a cascade of problems that has been developing in electronics and that the ERI effort, which, as you said, started in 2017, was designed to address. There's the offshore movement of fabrication and the rise of security threats to hardware, and one can see how these two are interconnected: if I'm fabricating critical hardware that might find itself in a warship or in an aircraft, I have to be very concerned that that chip does exactly what it was designed to do, nothing more, nothing less. What if it is fabricated in an overseas facility, or packaged in an overseas facility? So there are lots of areas for concern.

Another area that I think has been driving us is complexity. If you go back, I don't know, 30 years ago, when MTO was formed — I'm not sure exactly what the right number is, I'd have to look it up — but probably a circuit in that timeframe had 100,000 transistors, something like that. Now we're getting close to a trillion transistors in a typical circuit. So how do you design circuits like that in anything like a reasonable period of time and at a reasonable cost?

So there are any number of these factors that drove us to think about the technical surprises that might be ahead of us as a nation if we keep proceeding down the same path. Now ERI, you described it as a program; it probably is not best thought of as a program, but as a suite of programs. There are something like 25 or 30 programs right now at DARPA that are connected to ERI, and each of these programs is carving off and taking on a specific element of the overall set of objectives. But just as you correctly articulated, the overall objective is recognizing the commonality that exists right now between the commercial world and the defense world: we need to have an improved, stronger, more robust national capability to source leading-edge electronics.

SL. How do you think the commercial sector can benefit from the ERI programs, from the 25 programs you mentioned? How do you think it will benefit in the future?

MR. Yeah, I think in a number of different ways, and I think it's already happening. Let's talk about security, because that's usually the easiest one to recognize. There is an enhanced concern in the commercial world about security: if you're an automobile manufacturer, if you're an airplane manufacturer, you have a financial and a safety reason to care. Maybe, for example, you're interested in automated vehicles; you have a very strong reason to care that you understand all of the electronics that drives the decisions behind how that vehicle is going to operate, and that no one can manipulate the vehicle when they shouldn't, much in the same way as if you're flying a JSF [Joint Strike Fighter] aircraft and you need to be sure that you can trust that the electronics that goes into that aircraft is secure. So the programs that we've done that have been helpful in this regard and have made progress have benefited, I think, both the commercial sector and the defense sector.


SL. Yeah, since you mentioned hardware security, maybe we can discuss hardware security threats a little bit. People are probably familiar with malicious software, software bugs and all these things, but we can have similar things in hardware as well. I mean, one could think of a hacked computer keyboard: whenever I type something, there is a hacker somewhere reading whatever I type, who might retrieve my confidential information and use it for nefarious plans, to hack me, steal my information and so on. Why is trusted electronics so important, and what happens if an adversary country slips malware into hardware intended for DoD use?

MR. So, yeah, this speaks to technical surprise. If I get in my car and drive down the road, I'm relying increasingly on electronic systems to keep that car operating properly, everything from how the engine works, electronic ignition approaches, to, I guess at the opposite extreme, entertainment systems, listening to the music or whatever is playing, which maybe isn't all that critical; but in between there are any number of systems that I absolutely depend on.

Radar has become a common thing, obstacle avoidance, and especially if we move into the world, as I mentioned before, of automated vehicles, then I'm really relying on sensors like LIDAR sensors to recognize and avoid obstacles. Literally, lives depend on that safety, on that operation. So the mind almost reels at how large what we in the defense world call the “attack space” would be if an adversary were to try to create problems, because any number of problems, any number of scenarios are possible if you lose control of the integrity of that electronics. And I think it doesn't take much imagination to take everything I just said and apply it to the military domain as well.


SL. That's right. And one of the things that is probably very obvious, I would say, is that software can be patched easily. But that's not true for hardware. How are you going to patch hardware? It's a physical thing.

MR. Yeah, I think that's exactly right. So I talked about attack surfaces. An adversary could attack in any variety of different ways, everything from the hardware through the software and anything in between. But just as you said, software can be patched; hardware is intrinsically much more difficult to fix. Now, the way in which we have dealt with attacks that have been hardware-centric has been to impose, in the middleware, new approaches to thwart that attack. But a fundamental flaw in hardware is very difficult to fix.

So for example, we have a program in our office that goes by the name of SSITH (System Security Integration Through Hardware and Firmware), which is looking at entire classes of hardware errors that might exist and trying to systematically remove those errors, so that a “patch and pray” kind of mentality that says “every time we find a problem, we'll just fix it” can be avoided. Essentially, when you find a problem, you can close off that entire class of attacks, so that those can't exist anymore. And this program has been very, very successful. One element of this was what's called a bug bounty, in which people were given an opportunity to break some of the systems that we had built and look for bugs. And they did indeed find some; it was not perfect. But it was amazingly good, given the effort and time that was spent trying to break the systems in that particular program.

So that's an example of a program that we really think will transition directly into the commercial world, but will have just as big an impact on our defense world.

SL. Now, talking about the fourth wave of microelectronics, how much do you think 3D integration can give us beyond standard 2D chip design?

MR. Sure, sure. Well, let me step back a second and explain what is meant when we talk about the fourth wave.

So electronics has had this fantastic ride following Moore's Law for 50 years, 50-ish years. And this is a very familiar story, right? Gordon Moore predicted, I believe this was in the mid-60s, that the number of transistors would double every 18 months, sometimes quoted as 24 months, but more or less that there would be this geometric improvement in electronics that would be sustained for a very long time. And indeed that is exactly what's happened. That's how we went from thousands of transistors in a circuit to billions, and approaching trillions. Now, there have been several problems along the way. It hasn't been the continuous improvement it's often represented as.
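
A quick back-of-envelope check of that trajectory (the starting point is illustrative, not from the interview: the circa-1971 Intel 4004 had roughly 2,300 transistors):

```python
# Moore's Law arithmetic: double the transistor count every ~2 years.
start_year, start_count = 1971, 2_300   # roughly the Intel 4004
for year in (1991, 2011, 2021):
    doublings = (year - start_year) / 2
    print(year, f"~{start_count * 2 ** doublings:.1e} transistors")
# 1991 ~2.4e6, 2011 ~2.4e9, 2021 ~7.7e10: from thousands to billions
# and approaching trillions, consistent with the story above.
```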

For a very long time, and this is the first wave, what sustained progress was that transistors kept getting smaller and smaller. And that was good. We went from transistors that were many, many microns to things that were starting to be measured in nanometers. So that persisted for a long time and allowed for improvements. But eventually we got to a bit of a crisis, when the interconnects were actually much larger than the transistors themselves. So we could imagine a path forward in transistor development, but wiring and interconnecting them became the problem.

So the second wave was back-end-of-line fabrication approaches, which have been incredible, and are not spoken of as much as the silicon but are equally phenomenal. So now we get to a very complex technology that allows you to connect nanometer-scale transistors. And that proceeded for another 10 years or so. Eventually, though, we got to something that I kind of referenced already: we got to a point where we started to run out of gas in terms of what we could do with the transistors. So that scaling argument started to reach its end.

We were then able to extend Moore's Law another decade or more by going to these FinFETs that I mentioned before. These are intrinsically three-dimensional; they're not scaled versions of what we had in the 80s and 90s. They are a different beast. It's a three-dimensional active technology. Okay, so that's the third wave, and that's where we are today.

But we're starting to get to the edge of what's possible in those technologies as well, and we're starting to reach a point where the only thing that we can do is push into the third dimension. The way in which I think about it is that we are now going from three-dimensional active technology to three-dimensional passives, sort of in analogy to what happened between the first wave and the second wave. So in the fourth wave, what's ahead of us, we will be imagining dense structures of different types of electronics, different types of transistors, not necessarily all silicon, connected together in complex ways. The manufacturing of such an object, the design of it, ensuring the security of it, trying to emulate how it's going to operate, and especially the fabrication of it: all of these are unsolved, unknown problems, but they offer the only real path forward to improving electronics into the future.

Now, you asked me how much and how far. Just like Gordon Moore, I don't know how long, but we're seeing lots of indications in the programs that we're looking at that the opportunities are an order of magnitude, multi-order-of-magnitude, and maybe sustainable for quite a while.

SL. Now, out of curiosity, in terms of 3D fabrication, because I'm quite familiar with that, I've worked on that: do you have any program, is there any interest, in 3D deposition, drop-on-demand or atom-on-demand? Atom-on-demand is also possible, maybe using microscopy techniques, that's really advanced, but drop-on-demand can be done with organic electronics. Are you doing anything in that direction?

MR. No, but it's a really good question. Our partners at DARPA in the Defense Sciences Office have a program in which organization at the level of individual atoms is being explored. There are lots of opportunities in that regard.

For us, we are still living in a world where critical dimensions typically are on the order of a few nanometers, maybe 10 nm; that's still a lot of atoms. So we're not at the point of picking and placing individual atoms, but it's not that far off.

I think in our regard, going back to the narrative from before, I'm more interested in precise positioning of very small electronics, microchips, chiplets as we often call them. So you can imagine fabricating chips in conventional ways, but then assembling them in ways that are intrinsically much more involved and intrinsically three-dimensional, is what I'm trying to say.

SL. So you would place the chips... so the chip becomes the new unit, basically, that you can place in a three-dimensional space, you're saying...

MR. Yes. A great example of why you might want to do that: in processors today what we're limited by are the interconnects and what is called the “memory wall” problem, which is a problem with the kind of von Neumann processors that we use today, in which where you do computation and where you store information are separate, so you have to go fetch information and bring it back. What you would like to do is have memory much closer to, or even within, where you do the computation. And the net result of that would be the kind of order-of-magnitude, multi-order-of-magnitude improvement in computational efficiency that I was alluding to a second ago, mostly because today that's where your microprocessor, your PC, is spending all of its time and energy: moving data. If you get rid of that problem, or reduce it substantially, you'll have a much more efficient computational device.
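
To see why data movement dominates, here is a hedged sketch using ballpark per-operation energies of the sort often cited in the computer-architecture literature (illustrative numbers, not from the interview):

```python
# Illustrative von Neumann energy budget (order-of-magnitude only).
ENERGY_FLOP_PJ = 20          # one 64-bit floating-point operation
ENERGY_DRAM_WORD_PJ = 2000   # fetching one 64-bit word from DRAM

# If every operand travels from off-chip memory, the fetch costs
# ~100x the arithmetic it feeds:
print(ENERGY_DRAM_WORD_PJ / ENERGY_FLOP_PJ)

# Placing memory next to (or inside) the compute attacks the large
# term, which is why the potential win is orders of magnitude.
```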

SL. So basically we can think about a 3D space where you can position, at the microscale, CPU, GPU and memory, then CPU, GPU and memory, and so on, right? Something like that.

MR. Something like that, actually. We have two or three programs in the office already that are exploring aspects of just what you said, having the ability to do hybrid programmable processing. And, by the way, you want to do this in ways that are reconfigurable, not simply static, where you map it out and that's it, because that allows you to optimize.

SL. Now, in terms of going beyond von Neumann architectures, I'm interested in cognitive computing and neuromorphic engineering. Cognitive computing is basically a computer that hardwires machine learning algorithms into an integrated circuit that attempts to reproduce what happens in the human brain, and a neuromorphic computer is any device that uses physical artificial neurons to do computations. My question is, what's the advantage of doing AI directly on hardware rather than in software?

MR. Well, I think it's like everything else: one needs to have an AI implementation that is both hardware and software working together. One can implement an AI algorithm on a CPU, one can implement it on a GPU, but computational engines that are focused on efficient implementations of neural nets have order-of-magnitude advantages. And so that has driven a lot of different organizations, and here I think the commercial world is far in the lead, to develop specialty processors that are customized for machine learning, neural net, deep neural net type applications. However, having said that, our belief is that we are just scratching the surface of what is possible in accelerating computation in this regard, and so we think that there are leaps forward that can be made in trying to reduce the amount of power required for AI, and improving it in other ways as well.

SL. Is it fair to say that a hardware-based neural network consumes less power than a software-based neural network?

MR. That's absolutely fair. The figure of merit at the end of the day is: even for neural nets, one can calculate how many operations you're doing, and so how many tera-operations you're doing per watt of power. The goal is to push that forward and forward. The problem is that, by its very nature, particularly as problems get harder and harder, neural nets burn an awful lot of power. They require a lot of operations. So this efficiency parameter that I just spoke of is important today, and probably going to be even more important in the future.
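
That figure of merit is simple to state in code; here is a minimal sketch with hypothetical device numbers (nothing here is from the interview):

```python
def tops_per_watt(ops_per_second: float, power_watts: float) -> float:
    """Tera-operations per second per watt, the efficiency figure
    of merit described above."""
    return ops_per_second / 1e12 / power_watts

# A hypothetical accelerator doing 8e12 ops/s at 4 W -> 2.0 TOPS/W.
print(tops_per_watt(8e12, 4.0))
```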

SL. They burn a lot of power when you train them; that's very computationally expensive, right? But when you use them they burn less power, right?

MR. Yeah, that's true, but nonetheless, the number of operations tends to be very, very large.

SL. You also have a program called DARPA's Fast Event-based Neuromorphic Camera and Electronics, which seeks to develop an integrated event-based... so basically, it's a new type of infrared camera. What's that about? What kind of issues do you have with current systems, and how do you think this new neuromorphic camera will fix them?

MR. Well, neuromorphic cameras are really an interesting subset of the neuromorphic processing that you spoke about a moment ago. For the purposes that we're thinking about, let's think about a problem where maybe you're monitoring, I don't know, perhaps your door; perhaps you have a sensor that is looking for a package delivery person coming and knocking at your door. In the kind of example that I'm thinking of, you would have an image that's not very interesting, where almost nothing happens, ever, right? Because we're not really interested in the door, we're not interested in the sidewalk in front of the door, and all the parameters around it; those don't change. What we're really only interested in is: did a package show up, right? Was there a change in the scene that we cared about? In a lot of systems, military and industrial, we capture enormous amounts of information. And it's like that picture of the door: most of it has no value. So a neuromorphic camera is intrinsically eliminating, filtering out, all of that uninteresting information, and really looking at only the parts of the scene that are novel, in which something is going on that you need to be paying attention to. If you could realize such a camera in an efficient way, it would potentially be very low in power, because it's only taking the parts of the image that you care about. And also, and this can be even more important, the total amount of data it produces can be orders of magnitude less. So what it's returning to the user — the user might itself be a processor — is much more manageable. So the user doesn't have to use as much power either. So there are huge advantages. And people have implemented neuromorphic cameras; this is not new.

The problem is that in the real world it's usually not as simple as the static picture of my front door. Let's imagine that next to my front door I have a tree with leaves rustling in the wind: things are changing all the time, but not in an interesting way. So how do you separate the package from the rustling leaves? That's where it gets really interesting, and that's what that program is going after.
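
As a toy illustration of the event-based idea (a minimal sketch of the general principle, not the actual design of the program): report only the pixels whose intensity changes by more than a threshold, so a static scene produces almost no data.

```python
import numpy as np

def events(prev, frame, threshold=0.1):
    """Toy event-camera step: emit (row, col, polarity) only where
    the per-pixel change exceeds the threshold."""
    diff = frame.astype(float) - prev.astype(float)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(r, c, 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

door = np.zeros((480, 640))          # boring static scene
after = door.copy()
after[400:420, 300:330] = 1.0        # a "package" appears
print(len(events(door, after)))      # 600 events vs 307,200 pixels
# The hard part the program targets, telling a package from rustling
# leaves, is exactly what this naive thresholding cannot do.
```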

SL. So there would be plenty of applications in the commercial sector as well, for example for cars, for driverless vehicles and things like that...

MR. Yeah, I completely agree with that. Cars are becoming more and more like military systems all the time, in the sense that they're equipped with more and more sensors. They're collecting more and more information. And the problem very quickly becomes: what do you do with all that information? So this is one way to help.

SL. That's right. Another program that goes beyond the von Neumann architecture is Quantum-Inspired Classical Computing, which, according to the website, aims at solving DoD optimization problems using 500 times less energy. So how are you planning to do that? And are these simulated quantum computers?

MR. No. And thank you for asking about that, because I'm really excited about that program.

So the QuICC program, Quantum-Inspired Classical Computing, is really developing an entirely new paradigm. The way in which I like to tell this story... when I was young, in high school actually, I'm old enough that I remember being shown an analog computer and being told that this was state of the art. In fact, I was shown a box that was an analog computer and a box that was an early digital computer. And I remember being told, at the time, by the professor who was giving this lecture: “One of these two boxes represents the future of computing, we just don't know which one.”

Well, a number of years have gone by, the analog computing world certainly fell by the wayside, and we know which one has won the story to this point. But the QuICC program is challenging the assertion that digital computing is the right paradigm; it really is returning to something that is actually in between those two. There's not a lot of quantum in QuICC despite the Q in the title. What QuICC is about is taking a hybrid approach in which certain classes of computation are done in an analog way, but administered, controlled, and monitored by digital computing. So, think of an analog computer with a digital shell wrapped around it.

So, you mentioned optimization problems. To me, the number one optimization problem is always the classic traveling salesman problem: how does the salesman decide what route to take to minimize their distance?

It's a tremendously hard problem, actually, despite the fact that I think we all intuitively think it's simple. As the number of places goes up, it becomes harder and harder to calculate; it scales, as they say, in a very bad way.

For these kinds of problems, you can develop analog-physics analogues to these problems and then use the overall approach to calculate a solution for a very large class of problems that you could not solve as easily, or with as much energy efficiency, if you tried to do it using just digital techniques.
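
To make "scales in a very bad way" concrete, here is a small brute-force traveling-salesman sketch (illustrative only): the number of tours grows factorially with the number of cities, which is what motivates non-digital approaches for large instances.

```python
import itertools, math

def shortest_tour(cities):
    """Brute-force TSP: fix the start city and try every ordering of
    the rest; there are (n-1)! candidate tours to enumerate."""
    start, *rest = range(len(cities))
    d = lambda a, b: math.dist(cities[a], cities[b])
    length = lambda p: sum(d(a, b) for a, b in zip((start,) + p, p + (start,)))
    return min(itertools.permutations(rest), key=length)  # best order after city 0

print(shortest_tour([(0, 0), (2, 0), (1, 1), (0, 2)]))
for n in (5, 10, 15):
    print(n, "cities ->", math.factorial(n - 1), "tours")
# 24, then 362,880, then 87,178,291,200 tours: exhaustive search
# collapses quickly, even before energy efficiency enters the picture.
```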

SL. Okay. So the example that I can think of is when you compare a mechanical system to an electrical system, or a water system with an electrical system, where you have a [water] tank [whose constitutive equations are equivalent to] a capacitor and so on...

MR. In principle, yes, you could use a water system, but these are all electronic. So any water-based system could be mapped: essentially, electric current flowing is the analog of water flowing.

So you can build a system with the right physics to describe the problem electronically. And then, as I said, you use a variety of algorithms that you develop to condition and generalize the problem, and collect data from many instantiations of that analog element that you're going to replicate.
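
One way to picture the hybrid analog/digital idea in code (a sketch of the general concept only, not QuICC's actual architecture): encode the objective as an energy function, let analog-style dynamics relax toward a low-energy state, and let a digital shell program the problem and read out the answer.

```python
import numpy as np

# Toy "analog core": gradient dynamics settling into a low-energy
# state of an Ising-like objective E(x) = -0.5 * x^T J x.
rng = np.random.default_rng(0)
J = rng.standard_normal((8, 8))
J = (J + J.T) / 2                      # digital shell programs the couplings
x = rng.uniform(-0.1, 0.1, 8)          # analog state variables

for _ in range(2000):                  # continuous-time relaxation, discretized
    x += 0.01 * (J @ x - x**3)         # the cubic term keeps the state bounded

spins = np.sign(x)                     # digital readout of the analog state
print(spins, -0.5 * spins @ J @ spins) # candidate low-energy configuration
```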

SL. Now, the other thing I was interested in is extreme temperatures, electronics that can work at extreme temperatures, both low and high. You have a program that just started, the Low Temperature Logic Technology program, which is planning to develop liquid-nitrogen-temperature device technology to achieve a factor of 25 in performance per power compared to state-of-the-art room-temperature CPUs. So, I mean, you've just started, but I was wondering what you're planning to accomplish with this program, and where do you see these types of devices being deployed? Also in the commercial sector...

MR. Great question. So you did a pretty good job of describing the program already. People have known for a long time that computation is easier, more energy-efficient, at low temperatures, and ideally the lower the better. The problem, however, is that there's a lot of cost in refrigerating and cooling electronics to lower and lower temperatures. So how do you do something where you don't lose more energy in cooling than the efficiency that you gain? That's the trick.

The interesting part about the LTLT program that you mentioned is that what we're doing is taking conventional approaches, the CMOS, the silicon that we all depend on, and optimizing it so that it will operate at liquid nitrogen temperatures. And we chose liquid nitrogen because it seems to be right about the crossing point where the cost of refrigeration is not that high in terms of power, but the improvement that you get is very significant. So it seems to be an optimum. People have recognized for a while that there was opportunity here. But if you don't optimize the transistors, all of the devices, to operate at a low temperature, you find that the gain that you get in CMOS by cooling it is not sufficient to pay for all of the system costs of providing that 77 K cooling. So our projections are that a factor of 25-ish improvement, as you said, is possible. Even taking into account the cost involved in cooling, we think we'll get about an order of magnitude performance improvement.
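
A hedged back-of-envelope version of that trade-off, with illustrative numbers rather than program figures: every watt dissipated at 77 K must be pumped up to room temperature, and even an ideal Carnot refrigerator charges about 2.9 W at the wall per watt removed, so a 25x device-level gain shrinks substantially at the system level.

```python
# Net benefit of 77 K logic, at the ideal (Carnot) cooling limit.
T_cold, T_hot = 77.0, 300.0
watts_in_per_watt_removed = T_hot / T_cold - 1.0   # ~2.9 W/W, ideal

device_gain = 25.0   # assumed efficiency gain of cold-optimized CMOS
# Wall power = chip power + cooling power = P * (1 + 2.9), so:
system_gain = device_gain / (1.0 + watts_in_per_watt_removed)
print(round(watts_in_per_watt_removed, 1), round(system_gain, 1))
# ~2.9 W/W and ~6x net under these crude assumptions; real coolers
# are worse than Carnot, which is why cold-optimized transistors are
# essential to keep the net benefit near an order of magnitude.
```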

So where would you use that? In a data center that uses a tremendous amount of power, or in more compact versions of data centers that are less ambitious but still provide a tremendous number of calculations.

SL. Rendering, architectural programs, the film industry, MATLAB simulations, image processing: there are so many things that can be done with that. And one question that just came up now: are you planning to use any superconducting wiring for the interconnects? Because at that low temperature maybe there is some material that switches to a superconductor... is there anything...


MR. Sure. So that's a great question. For example, in the quantum computing environment, many of the approaches are superconducting. Right now we have no ongoing programs that are focused on that area, but it's certainly an area of interest to us. Again, from the standpoint of implementation in systems that have military relevance, a lot of the time going to something that's superconducting might be prohibitive, because the cooling demands become very large. So there's a balance here: your computation part works better, but your overall system power demands go up.

SL. Until we find a superconductor that works at room temperature. That's a long way off, I guess.

MR. Well, that's been a goal since I was in grad school.

SL. Now, on the opposite side, there are high temperatures. Low temperatures are generally good for electronics, but high temperatures are not; in fact, at high temperature you get multiple issues, like parameter deviations. If you think about the simple diode curve, it depends on temperature in the exponential (see the equation below). And you also get an increase in the failure rate of components.

So there are multiple scenarios in which we need electronic components that can survive at high temperature. The first thing I can think about is the space sector, or even the new hypersonic missiles. So is DARPA thinking of any program that could address this issue of making electronics more resilient at high temperatures?
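
For reference, the temperature dependence being referred to is the exponential in the textbook Shockley diode equation (standard form, not quoted in the interview):

```latex
% Shockley diode equation: temperature T enters through the thermal
% voltage V_T = kT/q in the exponential, and the saturation current
% I_S itself grows steeply with T.
I = I_S \left( e^{V/(n V_T)} - 1 \right), \qquad V_T = \frac{k T}{q}
% At T = 300 K, V_T ~ 25.9 mV; at 400 K, V_T ~ 34.5 mV, so the same
% bias produces a very different current: one of the "parameter
% deviations" mentioned above.
```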

MR. You mentioned ERI before; we are interested in moving towards electronics that can behave in extreme environments, and one of the first things that comes to mind is exactly what you're asking about here. So you mentioned hypersonics, but you don't even have to be that exotic: you can think about turbine engines that exist today and are used all the time. Typically, sensing the performance of these engines is very difficult now, because the temperatures are much higher than any electronics you have can be exposed to. So what we would like to do is build a new class of electronics that is more rugged and tolerates those kinds of temperatures. And there'll be just a raft of opportunities. Again, not to focus so much on automotive, but even your car engine would be yet another place where it would be ideal to have these capabilities, but they have not been realized.

SL. To have sensors inside the engine or things like that, you mean? 

MR. Sure, exactly. To optimize gas mileage, or an electric vehicle as well, to optimize the performance as a function of temperature. One can make inferences today, but you're not able to measure as closely as you would like if you're not able to put the electronics within the engine.

SL. Okay, now, changing topic a little bit, in terms of existential threats: our survival is based on electronics, basically; we've become dependent on electronics. I mean, think about the Internet, think about all the electronics that allows us to survive. But these electronics are threatened by things like electromagnetic pulses, which can be generated by natural sources such as solar storms, or by artificial sources, such as high-altitude EMPs. And this could disrupt the entire Internet, shut down the entire Internet, damage most electronics. Other than Faraday cages, is DARPA working on or thinking of doing a program to deal with this kind of issue?

MR. It's a really good question, a very timely question. We do not have a program in this area, but it's a very fruitful topic for research to understand how to make electronics less sensitive. By the way, you mentioned Faraday cages; that's a perfectly good way of ensuring that your device is more tolerant of EMP pulses. But another one comes back to the thing we were talking about some time ago, wide bandgap semiconductors, because their ruggedness, their ability to tolerate an EMP pulse, is likely to be much, much higher. So I don't think that we're going to see a world in which the state-of-the-art Intel microprocessor is going to be done in gallium nitride or gallium arsenide or any such material. That's probably too far out. But for critical pieces of electronics, where you're particularly concerned about EMP, there may be opportunities there.

So in answer to your question, there's not a specific program that I can point to today, but that's something that we hope to change in the near future.

SL. Okay. Now, what kind of far-out ideas is DARPA looking at which are too wild or too unprofitable for the commercial sector?

MR. So I find that a lot of what we do in MTO falls into one of two camps. Either it's a problem, like many of the ones we've been talking about, security, that is very mainstream, you know, it applies to automobile companies as well as to defense companies or to the Department of Defense.

Or there are other things that DARPA cares about that probably are not that generalizable; they are more specific, and yet some of these things that seem very unique end up being game-changing.

I'm thinking right now... there's a lot of work in our office in positioning, navigation and timing, PNT it's usually called. For lack of a better term, think GPS. GPS itself, however, was a technology that DARPA helped develop that was never intended to be commercial, never thought of in terms of its commercial benefits. It was about doing positioning of mobile platforms anywhere in the world, identifying where you were, without having a compass and a complex means of triangulating where you were. And we've all seen where that went. So there are programs in the office now on, if you want, the next generation of PNT, developing better clocks, for example. Those could potentially be of great value commercially in the future in a variety of different ways. But we're really more focused on the defense applications at the moment.

SL. Is there anything that you would like to do that seems impossible at the moment? Well, that's a sci-fi question...


MR. I think you're bounded by your imagination. Of course, the rules of physics have something to do with some of these things. But one of the fun things, we were talking about being a PM (Program Manager) earlier, one of the great things about being a PM is to try and think bigger than you've ever had an opportunity to think before. That GPS example was not a bad one, because the thought that said, I'm going to put satellites into orbit that will broadcast information, and I'll have a radio that essentially collects all this information and can locally process it and spit out exactly where I am: that sounded like science fiction at that time. And then the next generation beyond that, that said, I'm going to reduce that entire rack of equipment to something that I can put in my pocket. That also sounded like science fiction. So I don't want to tell you all my good ideas. They all sound like science fiction.

SL. Okay, the last thing I wanted to ask: is it possible for overseas entities, like academia or overseas companies in the West, to approach a DARPA program manager? Can they be part of a program?

MR. Sure, they can, and they are. It depends on the program to a degree; some programs are classified, and, you know, it'd be very difficult if not impossible for such programs. But the vast majority of what we do is quite open, and it's not just theoretically possible, it happens quite commonly that we have performers who are universities or industry located elsewhere.

SL. Okay, Mark, thank you very much for your time. It has been a great pleasure talking to you and I've learned a lot…

MR. Thank you. I appreciated it. And I enjoyed it, too.