A couple of months ago, I made the switch from Vim to Emacs. However, I’m using Evil mode, so I still have all the nice Vim keybindings.
One of my biggest complaints about Emacs is ergonomics – “Emacs Pinky” is a legitimate concern:
One solution is getting foot pedals or a fancy ergonomic keyboard like the Ergodox or Keyboardio, but I didn’t want to commit to that investment just yet. Thus, PinkyCtrls was born! I forked an existing project, Space2Ctrl, which turns the space key into a control key when held, but leaves it working as space when tapped. However, with Space2Ctrl I found that I mistyped spaces a lot, since I tend to mash the space key right after typing a word. So I decided to move the functionality onto the Caps Lock & Minus keys (where the quote key is on QWERTY layouts).
After about 2 weeks of using this mod, the normal control keys now feel uncomfortable, and I no longer have any finger strain when using Emacs.
Note: The code in my PinkyCtrls repo is currently pretty ugly… I may clean it up eventually, but for now it does the job. If you want to fork PinkyCtrls to work on another layout or with other keys, let me know and I’ll show you what you need to change!
At WearHacks Toronto 2016, we were trying to think of project ideas, but nothing came to mind that we all agreed on. So Adrian, Austin and I went out for coffee to take a mental break. While at the coffee shop, we were talking about all the types of light humans are blind to. Wouldn’t it be amazing if we could see the whole spectrum of light, including infrared and ultraviolet? … maybe we can!
When we got back to the Bitmaker offices (where the hackathon was held), we checked the available hardware to see if there were any IR cameras. Unfortunately, the only IR camera we could find was on the Kinect, and its IR camera only captures intensity (not spectral data). So we made do with what we had and decided to use both the IR and RGB cameras of the Kinect, rendering the two layers on top of each other.
Josh, Austin and I worked on the translation algorithm while Kulraj and Nick created an SFML GUI. The colorspace transformation we ended up designing and implementing shifts the visible light by 10 degrees of hue, then maps the IR spectrum into the first 10 degrees of red hue. Essentially, we squeezed the visible light from the non-IR camera so there was no red left, then put a red layer on top of that image to display the infrared data.
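The idea is easier to see per pixel. Here’s a rough sketch of the hue-compression trick (this is not the actual InfraViewer code; the IR threshold and the HSV round-trip are my own simplifications):

```python
import colorsys

IR_BAND_DEG = 10.0  # hue band reserved for IR, per the scheme above

def merge_pixel(rgb, ir, ir_threshold=0.5):
    """rgb: (r, g, b) floats in [0, 1] from the visible-light camera.
    ir: intensity in [0, 1] from the IR camera."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Squeeze every visible hue out of the first 10 degrees: [0, 360) -> [10, 360).
    h_deg = IR_BAND_DEG + h * (360.0 - IR_BAND_DEG)
    if ir > ir_threshold:
        # Strong IR reading: paint this pixel in the reserved red band instead.
        h_deg = (1.0 - ir) * IR_BAND_DEG  # hotter -> closer to pure red (0 degrees)
        s, v = 1.0, ir
    return colorsys.hsv_to_rgb(h_deg / 360.0, s, v)
```

With no IR signal, a pure red input pixel comes back nudged 10 degrees toward orange, leaving pure red free to mean “infrared” unambiguously.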
The hardware turned out to be pretty difficult to overlay. The IR and visible-light cameras had significantly different resolutions and aspect ratios, so we had to program some magic to translate the IR data to fit on top of the RGB data. Because of this, the IR layer was slightly offset from where it should have been. For instance, a slight red glow would appear around our bodies (from our body heat), but it would be shifted to one side, since the camera perspectives were not identical.
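The “magic” boils down to remapping pixel coordinates between the two frames. A minimal sketch, with assumed resolutions (not necessarily the Kinect’s actual ones) and a hand-tuned offset standing in for proper calibration:

```python
def ir_to_rgb(x_ir, y_ir, ir_res=(512, 424), rgb_res=(1920, 1080), offset=(0, 0)):
    """Map an IR-frame pixel onto the RGB frame by scaling each axis
    independently, plus a fixed pixel offset for the gap between the lenses.
    Independent per-axis scaling stretches the image when the aspect ratios
    differ, and a constant offset can't model differing perspectives --
    which is exactly why our overlay stayed slightly off."""
    sx = rgb_res[0] / ir_res[0]
    sy = rgb_res[1] / ir_res[1]
    return int(x_ir * sx) + offset[0], int(y_ir * sy) + offset[1]
```

A real fix would use per-camera intrinsics and a depth-aware registration, but at a hackathon, scale-and-nudge gets you a demo.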
You can check out the project here: InfraViewer. But be warned, the code base is pretty messy!
One night a couple of weeks ago, I rigged up a basic synthesizer that generates an audio signal for a 3.5mm jack with just a couple of transistors, resistors and capacitors.
To generate the signal, I used a simple flip-flop oscillator (an astable multivibrator). There are two ways to change the frequency (pitch) of the signal: a) changing the capacitance of one (or both) sides of the oscillator, or b) changing the flow of electricity (in this case, via a potentiometer).
In this circuit, I decided to use both methods just for fun. Pressing one of the pads (hacked together with some tape and aluminium foil) switches additional capacitors into the circuit; the increased capacitance means a longer charge time, which results in a lower frequency. The potentiometer acts as a wobble knob by changing the voltage across the capacitors.
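For a symmetric astable multivibrator, the capacitance-to-pitch relationship is easy to estimate: each half-period is ln(2)·R·C, so the full period is T = 2·ln(2)·R·C. A quick sketch (the component values here are made up for illustration, not taken from my circuit):

```python
import math

def multivibrator_freq(r_base, c_timing):
    """Output frequency of a symmetric two-transistor astable multivibrator:
    each half-period is ln(2) * R * C, so T = 2 * ln(2) * R * C."""
    return 1.0 / (2.0 * math.log(2) * r_base * c_timing)

# Assumed values: 100 kOhm base resistors, 10 nF timing caps.
f1 = multivibrator_freq(100e3, 10e-9)  # ~721 Hz
f2 = multivibrator_freq(100e3, 20e-9)  # pad pressed: capacitance doubled
# Doubling the capacitance halves the frequency, dropping the pitch an octave.
```

This also shows why the pads work musically: each doubling of capacitance drops the pitch by exactly one octave.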