- Chris and Scott were talking on email about the JLink (edu) debugger that Chris got from Adafruit.
- Scott attended U of W before starting at Google on the maps team working on the bike directions overlay.
- This layer was part of the “Ground Truth” initiative at Google, a way to map data from the real world. Their primary dataset was from the Rails-to-Trails project.
- After he left Google he was working on quadcopters and drones, which used the Cleanflight / Betaflight software. This was for his company Chickadee Tech.
- Scott talked more about the project on the Macrofab Engineering Podcast (MEP22)
- His personal/portfolio site (as well as many of his online handles) is called Tannewt
- After showcasing quadcopters on Adafruit’s Show and Tell, he was asked to port MicroPython to the SAMD21.
- We have talked about MicroPython on the show before when Tony Dicola was a guest
- A good place to start is asking “What is Python?”
- Automate The Boring Stuff book
- Interpreted vs compiled languages
- Machine code vs byte code
- MicroPython vs CircuitPython
- Ports coming for SAMD51, nRF52
- The Python struct library helps when interfacing with C and hardware.
- Scott has a great tutorial about using a JLink debugger.
- The Micro Trace Buffer (MTB) allows you to store trace locations in RAM.
- You can download the latest CircuitPython release for various Adafruit boards. This is the binary you can load onto your chip that will let you start dropping files onto the mass storage drive.
- Radomir/deshipu showcases CircuitPython via the uGame, which is sold on Tindie
- Emily Dunham “How to automate your community”
- Bill Gates answers the “tabs vs spaces” question
- Ask an Engineer is a weekly show by Adafruit.
- Adafruit has a Discord server where they discuss CircuitPython (and other projects)
- The Adafruit CircuitPython group does a weekly voice meeting on Discord, it’s later posted to YouTube
- A blog post about the plans for CircuitPython in 2018
- Scott is the one running the Seattle 3H group (Hardware Happy Hour)
- PyCon in Cleveland 2018 and 2019
- The Open Hardware Summit (OSHWA) is in Boston on Sept 27th, 2018
- Reach Scott on the Discord server or via email
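The struct library mentioned in the notes is worth a quick illustration. This is a minimal sketch of packing and unpacking binary data the way C code and hardware lay it out; the record format here is invented for the example:

```python
import struct

# Pack a hypothetical sensor record as a C struct would lay it out:
# a uint8 status flag, a little-endian uint16 raw count, a float32 reading.
# "<" means little-endian with no padding, so the record is 1 + 2 + 4 = 7 bytes.
packed = struct.pack("<BHf", 0x01, 513, 3.5)
print(len(packed))  # 7

# Unpack bytes received from hardware back into Python values.
status, raw, reading = struct.unpack("<BHf", packed)
print(status, raw, reading)  # 1 513 3.5
```

The same format strings work in CPython and in MicroPython/CircuitPython's ustruct, which is what makes the library handy for talking to registers and wire protocols.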
Liam Redmond says
Sorry, had to comment. Screaming at this podcast….”the debugger is ON the chip”.
Arguably the single most important aspect of a modern microcontroller compared to a bare CPU from the days of old is that it has a built-in debug controller.
This is a key element of what makes these things cheap and easy to use. In the olden days we had to buy debuggers costing tens of thousands of pounds that didn’t even have the same functionality.
These would intercept the actual pins of the CPU and look for address and data matches in real time then stop the clock of the CPU.
Your J-Link/ST-Link is not a debugger; it’s a JTAG/SWD hardware interface, and GDB is a software interface.
Love the show by the way 🙂
John says
Started listening to the podcast. Really good. The feed only has 25 episodes on iOS; Android only has the app. Any chance that iPhone users could also get something that helps them listen to all the episodes?
Erik says
There’s a saying that you don’t fully understand a subject until you can explain it to your grandmother. It was clear this week that none of you knew what you were talking about. The show is usually great. This was not.
Jon says
Awesome AMP Hour episode. I love how you guys mix it up. It’s always nice to hear from folks who have made the switch from software to software/hardware, since that is where I am heading. The info from Scott regarding debugging the SAMD processor is very timely, since I am in a similar position trying to learn to debug the STM32 line of chips. Thanks, Scott, for sharing your knowledge in this area.
In response to the comment by Liam above, could you please point to the datasheet reference that shows where the debugger is on the chip? As far as I understand it, the processor has registers that could be used for debugging, and you need to load code in order to enable that debugging and use a debugger/programmer such as the J-Link EDU to perform such debug.
Quote from the SAM7X (ARM7TDMI-based) datasheet:
http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel_32-bit-ARM7TDMI-Flash-Microcontroller_SAM7X512-256-128_Datasheet.pdf
“The ARM7TDMI EmbeddedICE is supported via the ICE/JTAG port. The internal state of the ARM7TDMI is examined
through an ICE/JTAG port.
The ARM7TDMI processor contains hardware extensions for advanced debugging features:
– In halt mode, a store-multiple (STM) can be inserted into the instruction pipeline. This exports the contents of the
ARM7TDMI registers. This data can be serially shifted out without affecting the rest of the system.
– In monitor mode, the JTAG interface is used to transfer data between the debugger and a simple monitor program
running on the ARM7TDMI processor”
Please clarify.
Looking forward to the next episode.
Cheers,
Jon
Liam Redmond says
From the ARM7TDMI technical reference manual (http://infocenter.arm.com/help/topic/com.arm.doc.ddi0234b/DDI0234.pdf):
Sections 1.3 and 5.2 give nice diagrams of how the core, the TAP controller and the embedded debugger fit together.
It has of course evolved over the years for different cores, but for the ARM7TDMI they call it the EmbeddedICE-RT macrocell. The TAP (Test Access Port) controller is what exposes this to the outside world and what the debug adapter connects to (TDO, TDI, TCK, TRST, etc.). SWD is, I believe, a further serialisation of the JTAG signals; it has the same effect, though: it allows an external debug adapter to interface with the internal TAP.
The TAP is connected to the debug controller via the “scan chain”, which (simplistically) has command codes like “set breakpoint”, “read memory” and “write memory”. The scan chain is filled serially, which is why it’s cool: it allows multiple scan chains to be accessed (if you have two or more JTAG-capable chips on your board, ARM + FPGA for example).
It’s things like OpenOCD and other debug servers that translate what you need to do (set a breakpoint, read some memory) to the commands required to program the embedded debugger through the TAP.
Some cores have multiple break point registers and data read/write break points along with a variety of trace support. OpenOCD tries to make sense of all the subtle differences between all the different TAP controllers and capabilities in various cores (not just ARM).
One interesting point is that when single stepping, the debug server needs to perform quite a dance on the JTAG interface to get it right. It has to know where it is (by reading the PC register), set a breakpoint at the next instruction and continue execution. Depending on the length of the scan chain that can be quite a lot of bits to clock in.
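The single-step dance described above can be sketched in a few lines of Python. Everything here is a toy model invented for illustration (there is no real TAP or JTAG underneath); it only shows the order of operations a debug server performs, and it glosses over instruction decoding (a real server must work out where a branch goes to find the “next” instruction):

```python
class ToyCore:
    """Stand-in for a halted CPU core reachable over a TAP (hypothetical)."""

    def __init__(self, program):
        self.program = program      # list of instructions
        self.pc = 0                 # program counter
        self.breakpoints = set()

    def resume(self):
        # Run until we hit a breakpoint or fall off the end of the program.
        while self.pc < len(self.program) and self.pc not in self.breakpoints:
            self.pc += 1


def single_step(core):
    pc = core.pc                      # 1. read the current PC over the scan chain
    core.breakpoints.add(pc + 1)      # 2. set a breakpoint at the next instruction
    core.resume()                     # 3. continue; the core halts at the breakpoint
    core.breakpoints.discard(pc + 1)  # 4. remove the temporary breakpoint


core = ToyCore(["mov", "add", "cmp", "bne"])
single_step(core)
print(core.pc)  # 1
```

On real hardware each of those four steps is a scan-chain transaction, which is why stepping over JTAG involves clocking in so many bits.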
Of course you can do much more than debug and trace with JTAG; you can also test the board’s pin connections and suchlike. The “D” in ARM7TDMI means “debug” and the “I” means EmbeddedICE macrocell. So the ARM7TDMI has both debug extensions and ICE, whereas the ARM7TDM has no ICE.
The ICE does the job of an old style ICE (in-circuit emulator) but because it is tightly coupled to the CPU core (not just the external pins of the device) it can do much more.
The Cortex-M range of cores uses the DWT, ITM, TPIU and ETM. Basically the same idea as the ARM7TDMI but with greater functionality. The Cortex-M technical reference manual is probably a more relevant read for today’s micros.
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0439b/DDI0439B_cortex_m4_r0p0_trm.pdf
Hope this helps.
Jon says
What you have described is the debug interface capabilities of the processor. The debugger is still an off chip device such as a J-Link Debugger/Programmer.
At least that is how I see it.
Liam Redmond says
Indeed. It raises the actual question of what a debugger is. To me it’s the hardware device that monitors the data and address bus and stops the processor where I asked it to. To others it’s something on a menu in an IDE, and to others still it’s something in between.
The crux is none of the other stuff would work if you didn’t have the embedded ICE.
Jon says
Perhaps my view of what a debugger is has been skewed by how the interface has typically been described. There may be a reason why Segger refers to it as a J-Link Probe.
But then again, they do refer to Ozone as The J-Link Debugger. And OpenOCD is the On-Chip Debugger, but it is a software tool. Sort of like a Where’s Waldo scenario.
Thanks for sharing your knowledge on this. It’s been helpful.
Cheers,
Jon
Glenn Nelson says
I’m sure by now you’ve received innumerable explanations about compilers and interpreters. Here’s one more.
A compiler takes high-level code and generates “machine code”. The machine code is targeted for specific processors, hence you need a compiler for each processor.
But note that the machine could be pseudo-hardware, aka a “virtual machine” (not to be confused with virtual machines such as VMware or VirtualBox).
If you are using a VM to run a program, then the language is compiled to pseudo-code and the VM runs the pseudo-code. That is to say, the VM emulates a microprocessor, but not necessarily an actual hardware microprocessor.
The most widely deployed of these virtual machines is JVM – the Java Virtual Machine. Java programs are compiled to Java byte-code.
There is a JVM for each OS, e.g., MS Windows, MacOS, Linux, FreeBSD, etc. The JVM itself is actually an interpreter of the byte-code.
Java is an enterprise language, hence the JVM is written for the OS, rather than just for a specific processor. Java claims to be “write-once, run anywhere”.
Although this statement is often derided as a lie, in my experience it’s true.
Obviously a language compiled to a VM will not run as fast as a language compiled to native machine code, but the performance hit is not as bad as you might imagine.
An interpreter reads each line of code and executes it “on-the-fly”. BASIC was one of the first interpreted languages.
To the best of my knowledge, languages such as LISP, Forth, PHP and Javascript are interpreted.
With interpreted languages, you merely edit the code and rerun – no compile step is needed.
Python uses a virtual machine interpreter. Python behaves like an interpreted language, but each time you run your Python program it is first compiled to bytecode (cached in a “pyc” file for imported modules), and this is then run by the Python interpreter/virtual machine. There are also variants of Python such as Cython (compiles to C code and then to machine code) and Jython (compiles to Java byte-code, which runs on the JVM).
JIT stands for “just in time” and is usually shorthand for just-in-time compilation. Java uses it to transform Java byte-code into machine code while your program is running. Standard Python does not JIT; its compile-to-bytecode step simply happens on the fly, though variants such as PyPy do include a JIT.
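The compile-then-interpret step is easy to see from Python’s own standard library. A quick sketch:

```python
import dis

def add(a, b):
    return a + b

# Every function object carries its compiled bytecode as raw bytes.
print(type(add.__code__.co_code))  # <class 'bytes'>

# compile() is the same front end the interpreter runs on your source;
# exec() then hands the resulting code object to the bytecode interpreter.
code = compile("x = 1 + 2", "<example>", "exec")
namespace = {}
exec(code, namespace)
print(namespace["x"])  # 3

# dis.dis(add) prints a human-readable listing of the function's bytecode.
dis.dis(add)
```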
Python History:
Python was created by Guido van Rossum as a general-purpose language, and it caught on with scientists who wanted a faster way to prototype and test code than FORTRAN. Python can call compiled FORTRAN and C libraries; thus things such as matrix math can be very efficient in Python when linked to such a library (this is what NumPy and SciPy do).
Python Today:
Used extensively by Google, Facebook and others to run their enterprise Big Data services.
Probably the most popular language for machine learning.
R is another language for ML; it began life as the open-source variant of “S”, a language for statistics.
Liam Redmond says
A nice description. I would add that even some common interpreted languages can essentially be compiled too, like Forth and BASIC, and indeed you can get interpreters for C and C++.
ivanjh says
Here’s one more…
Let’s say a CPU’s native language is French. You are writing instructions in English as it is quicker and less error-prone for you.
A compiler is an English-to-French translator. It takes your complete English instructions and writes a full set of instructions in native French. When it runs, the CPU only sees the French. It runs very quickly, as the CPU’s only job is performing the given actions in its own language – and the translator’s intimate knowledge can produce very succinct French. This is very difficult, as writing French has many rules and subtleties.
An interpreter is a French speaking CPU who – for the set of instructions to follow – is given a French authored English-to-French phrase book. Your English instructions are provided to it as information to use when following the phrase book. At every step of your instructions, it follows the phrase book to recognise the English and then actions the French translation. It runs much slower since the CPU is constantly matching your English into something it understands how to do, and the phrases available are very limited compared to the full scope of the French language.
Now, you want to run on a 2nd processor as well – one that understands German.
This means you’ll be maintaining an English to German phrasebook as well.
What if… instead of having to constantly worry about English, I introduced an intermediate step? A translator that converts the English into a common numbered set of known actions – we’ll call Numeric Phrase References – only once (maybe in a manual step, or maybe automatically at startup).
Your instructions (rewritten to a set of these Numeric Phrase References) could then be made available as data input to the CPU who is following the instructions in a much smaller Numeric-German phrase book to undertake the task. It’s very quick to decode those, because the numbers are simple to lookup.
These Numeric Phrase References (called p-code or byte code) are like instructions for an intermediate machine that doesn’t really exist – a Virtual Machine. Once the common phrases are decided – it’s much easier to create a numeric phrasebook for a new language than to train a fully fluent multi-lingual translator.
To complicate matters here – many high-performance Virtual Machines (Java, .NET, PyPy, JavaScript) include routines that “Just In Time” turn the bytecode into native CPU instructions (JIT compilers). This can actually be quicker than code built elsewhere ahead of time, since the JIT compiler knows the exact CPU it will run upon – instead of building for a lowest common denominator (e.g. i386).
With all of these options – your initial English instructions are exactly the same. It’s just the method/speed in which they get executed (what the CPU does can end up vastly different).
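The “Numeric Phrase Reference” idea maps directly onto a stack-based bytecode VM. A toy sketch in Python, with opcodes invented for the illustration:

```python
# Each "numeric phrase" is just a number the VM can look up quickly.
PUSH, ADD, MUL = 0, 1, 2

def run(bytecode):
    """Interpret (opcode, argument) pairs on a tiny stack machine."""
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# "(2 + 3) * 4", translated once from "English" into numeric phrases:
program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None)]
print(run(program))  # 20
```

Porting this VM to a new CPU means rewriting only the `run` loop (the numeric phrase book), not the translator that produced `program` – which is the portability win the analogy describes.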
Chris Gammell says
I really like this explanation, thank you!
benn686 says
Would have liked to hear more about the differences between MicroPython and the CircuitPython fork. For example, does CircuitPython continuously update along with MicroPython updates? Or if targeting an unsupported processor (ATmega1284, SAME70), which is easier to port?
Why does Scott not like to use IDEs (MS MakeCode, Eclipse, VisualGDB, Keil Eval (64K code limit), Visual Studio GDB extension, etc)?
Also, for the hobbyist, the ST tools and Discovery boards are usually much cheaper than Microchip/Atmel/Segger tools (see the Black Magic Probe, or the $2 blue pills emulating the ST-Link)… any insight as to why the SAMD21 was chosen over something from ST?
For example, the $50 Atmel-ICE looks to have SWO debug trace capabilities, yet Keil’s uVision doesn’t support it through its generic CMSIS-DAP interface… Any idea if Atmel Studio or Eclipse can take advantage of it?
https://community.atmel.com/forum/how-display-itm-based-output-atmel-studio-arduino-due-j-link
https://mcuoneclipse.com/2016/10/17/tutorial-using-single-wire-output-swo-with-arm-cortex-m-and-eclipse/
I like using Slack/Mattermost, but Discord looks really interesting.
Scott says
Hi Benn686, Scott here. Thank you for the thoughtful questions. Hopefully you find these answers thoughtful. 🙂
More differences between CircuitPython and MicroPython are documented here: https://github.com/adafruit/circuitpython#differences-from-micropython
CircuitPython updates to the latest MicroPython release when beginning work on the next major CircuitPython version.
I believe CircuitPython is slightly easier to port because it’s got more common code factored out. The way it’s factored out (shared-bindings and supervisor directories) makes it particularly easy. (The ATmega won’t work because it’s 8-bit and MicroPython assumes 32-bit.)
I haven’t sat down and really tried an IDE, so please take these opinions with a grain of salt. I don’t like to use IDEs because it’s hard to apply the techniques learned in them across different platforms. By learning and using tools separately I can use some of them later on a different platform. For example, the GDB skills I learned at Google working on servers still generally apply to using GDB with a microcontroller. This is also why I’m a printf debugger first and foremost; it’s the debugging technique with the broadest support across platforms. It was a shock to me when I only had an LED to blink, though. 🙂
The SAMD21 was originally chosen for the Arduino Zero, so I don’t have insight into that particular decision. I wasn’t at Adafruit when most of the SAMD21 products were released either. However, I know having existing Arduino support and documentation is a huge benefit for us because it’s less work to support. Furthermore, at the time, Adafruit got a good price for the chips. 🙂 Most of our users simply rely on the bootloader and serial to debug Arduino sketches rather than SWD debugging, so dev tool cost isn’t an issue for us. Porting MicroPython to the SAMD21 made sense because Adafruit was already invested in the SAMD21. Debugging CircuitPython happens over serial, so dev tool cost doesn’t matter there either.
I don’t know about Atmel Studio and Eclipse support for SWO because I don’t use either of them. When I printf debug I use the USB serial connection, and GDB when that fails.
Feel free to ping me (tannewt) on our Discord (https://adafru.it/discord) if you have more questions. I’m happy to answer them.
ben686 says
Thanks, will do!