- Chris has been making antennas on a mill and measuring them on a VNA.
- Past guests Pete Bevelacqua and Jeff Keyzer offered their help with the antenna work.
- Hardie Pienaar was nice enough to simulate the antenna Chris cut to show what it should look like. It matched the reality well (this is the cover image for the episode).
- Alan Wolke made a great video about how to use Smith Charts
- Chris had to buy a bunch of n-type connectors, leading to a term he called “KonnectorPanik”
- Apple is moving some of its manufacturing out of China
- Looking up the term “student destinations” for different schools shows where students go on to work and how much they make. We looked at Carnegie Mellon’s recent reporting.
- We heard about Kong.cash from Paul Gerhardt. It’s a “physical cryptocurrency” based upon the Ethereum blockchain (Chris edit: trying to hold it together to get to the good parts here).
- There is a flex PCB with UV printing and an on-board security chip that allows users to validate transactions.
- The ATECC608A holds private keys and enables a bunch of internet-connected devices these days (see the sign/verify sketch after this list).
- “The chips are a keypair that identify the bill to an escrow contract. That contract holds a value timelocked for 3 years. You can check that the value is still there via NFC and even interface to it via the electrical connectors.”
- Dave did a (non-crypto) wallet review
- Dave also did a teardown of (not just) a millivoltmeter. The hinged design and metal cans over each section were beautiful.
- Dual display AC millivoltmeters
- OnShape bought by PTC
- OpenSCAD, a project started by past guest Clifford Wolf
- Flash memory is bricking Teslas
- Someone posted about a “chip printer” on Kickstarter. What the heck is an n-type filament?
- There’s a $5 FPGA board coming to Seeed studio (h/t to Frank Buss) based upon the Gowin FPGA from Alcom
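To make the Kong.cash mechanism above concrete: a secure element like the ATECC608A holds a P-256 private key that never leaves the chip and signs challenges with it, so anyone holding the registered public key can verify that the genuine chip (and thus the bill) is present. Here is a minimal software sketch of that sign/verify flow using Python’s cryptography library; the keypair is generated in software purely for illustration, and this is not the actual Kong.cash protocol.

```python
# Minimal challenge/response sketch: a private key signs a nonce, and the
# registered public key verifies the signature. In the real chip, the
# private key is generated inside the secure element and cannot be read out.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the keypair provisioned inside the chip (P-256 / SECP256R1).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# A verifier (e.g. an NFC reader) sends a random challenge...
challenge = b"32-byte-random-nonce-goes-here.."

# ...the "chip" signs it with its private key...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the verifier checks the signature against the registered public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("signature valid: the registered keypair is present")
except InvalidSignature:
    print("signature invalid")
```

Because the private key can’t be extracted, a valid signature over a fresh challenge is strong evidence that the physical bill itself is present, which is what ties the escrowed value to the paper.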
nopls says
The whole flash memory thing is really really embarrassing for Tesla and I wonder how it ever got past any sort of quality control.
First, they used the completely wrong flash type (eMMC), which is more commonly found in really cheap laptops and SD cards and is well known to be unable to sustain any kind of prolonged write load.
This makes me think that this storage was never intended to be written to at all; it’s probably only there to hold the OS and the occasional software update. Seems like they forgot to mount that drive as read-only.
But even then, if they had used proper SSD flash instead, this would not have happened. Enterprise SSDs are literally rated in DWPD (drive writes per day); most are rated for 3 DWPD over a period of 5 years, and even then the manufacturers massively underrate them. They are probably good for 3-4 times that.
Then, like Dave said, they should have just written to RAM and dumped that to disk once a day. That way they could even use some aggressive compression like xz, reducing file size by ~50% (assuming the logs are plain text). A sketch of that buffer-and-flush approach is below.
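A minimal sketch of the buffer-and-flush idea, assuming a hypothetical log directory and a once-a-day interval (Python’s lzma module writes the xz container format):

```python
# Sketch: accumulate log lines in RAM and flush them to flash once a day
# as one large, xz-compressed sequential write, instead of hammering the
# eMMC with small writes. Paths and the interval are illustrative only.
import lzma
import time

FLUSH_INTERVAL_S = 24 * 60 * 60   # once a day
LOG_DIR = "/var/log/buffered"     # hypothetical directory on the eMMC

buffer: list[str] = []            # log lines live in RAM between flushes
last_flush = time.monotonic()

def log(line: str) -> None:
    """Buffer a line in RAM; flush to flash if the interval has elapsed."""
    global last_flush
    buffer.append(line)
    if time.monotonic() - last_flush >= FLUSH_INTERVAL_S:
        flush()
        last_flush = time.monotonic()

def flush() -> None:
    """Write the whole buffer as a single compressed file, then clear it."""
    if not buffer:
        return
    data = "\n".join(buffer).encode("utf-8")
    path = f"{LOG_DIR}/log-{int(time.time())}.xz"
    with lzma.open(path, "wb") as f:   # lzma == the xz container format
        f.write(data)
    buffer.clear()
```

The trade-off is that a crash or power loss loses everything buffered since the last flush, so anything safety-relevant would still need its own path to persistent storage.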
Seriously, I considered these things when I set up a Raspberry Pi to connect my mum’s laptop to her speaker over WiFi. That thing never writes to its SD card, except when it does software updates. It’s not even hard to do; it’s the default mode on Alpine Linux (the embedded distro I used).
Matthew Suffidy says
No one was thinking about the flash going down, but it did. I guess the lesson, though, is that the vehicle should still work with a number of parts compromised. I don’t know if it was established that Linux had anything to do with it, but Linux does not have a C: drive; it has definable mount points and raw devices.
Tony says
Another vote for the flash memory thing being embarrassing for Tesla. Working around a limited write endurance is an effectively solved problem. The two things at an absolute minimum are wear leveling and over-provisioning.
The 8GB eMMC used by Tesla already does the wear leveling on its own; it would likely have died much sooner if it didn’t. Over-provisioning by using a 16GB eMMC (and keeping 8GB as the usable space) would have doubled the lifespan. MLC eMMC modules operating as pSLC can get closer to an order of magnitude more writes.
Killing an 8GB eMMC with log data would be impressive. It makes me wonder if one of the software people had it writing and re-writing a small log without thinking about it. You could just continually write tens of bytes to the same spot and have the wear leveling eat the drive alive. I’d put money on it being used for dynamic data. Even at an endurance of 0.25 DWPD over four years, that’s almost 3 terabytes of data written (quick math below).
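A quick back-of-the-envelope check on those numbers (the DWPD figure and the pSLC multiplier are this thread’s assumptions, not published specs):

```python
# Rough endurance math: total data written over a drive's life at a given
# DWPD rating, and how over-provisioning or pSLC mode stretch it.
def total_writes_gb(capacity_gb: float, dwpd: float, years: float) -> float:
    """Total data written over the drive's life at the given DWPD rating."""
    return capacity_gb * dwpd * 365 * years

base = total_writes_gb(8, 0.25, 4)   # 8 GB eMMC, 0.25 DWPD, 4 years
print(f"8 GB @ 0.25 DWPD x 4 yr: {base:.0f} GB (~{base / 1000:.1f} TB)")

# 16 GB of raw flash exposed as 8 GB doubles the cells sharing the wear...
print(f"100% over-provisioned:   {2 * base:.0f} GB")

# ...and MLC run in pSLC mode is roughly an order of magnitude more endurant.
print(f"pSLC mode (~10x):        {10 * base:.0f} GB")
```

That lines up with the “almost 3 terabytes” figure above: 8 GB × 0.25 × 365 × 4 ≈ 2.9 TB.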
They could have looked at the enterprise storage industry and seen all the tricks vendors have been using for as long as flash-based storage has been in use. Even outside of storage optimized for high endurance, you’ll see cases like Dell over-provisioning by 100% the SD cards that can be used for booting a hypervisor.
Like Dave suggested, keep all dynamic writing in RAM and dump to storage at set intervals. Writing in larger blocks also gets around some of the problems with write amplification caused by wear leveling algorithms.
Seth Tucker says
Dave is wrong about dumping that much data to the cloud. Sure, for $45 a month you could dump ~100 GB to the cloud, but that’s an insane cost to pay for a vehicle you’ve already sold. After five years that’s $2,700, which is a pretty decent bite into their profit margin, when they could just, you know, spec the correct drive. Plus, there are plenty of customers who won’t live in areas with good enough service to get that kind of throughput.
Beyond that, it doesn’t really address the issue. If the flash memory is wearing out, it’s being overwritten, so they aren’t keeping it anyway.
My guess is that the drive was never supposed to be written to that much in the first place. That’s a stupid amount of logging data, especially when it just gets overwritten and thrown out. I suspect they dropped the ball on software quality and didn’t notice that they’d left some testing diagnostic logs active that should have been disabled in the production release.
I don’t agree with Chris that this is something that could “just happen.” It happened because they were in a hurry and prioritized features over quality. This was a major screw-up, and it was obvious enough that it shouldn’t have made it into a production product.