[<img src="/images/portal_2_logo-150x150.jpg" alt="Portal 2 Logo" title="Portal 2 Logo" width="150" height="150" style="float: right" />](/images/portal_2_logo.jpg) Got my hands on Portal 2 and finished a run through the single player campaign. Was a *lot* of fun, the characters were bursting with humor and personality. Just like the first game, it was hard to stop playing. *Unlike* the first game, it's got some length, so I stayed up late a couple nights with my eyes glued to the television. I already want to play through it again to find any little things my tired eyes may have missed.
I'm itching to give co-op a try, so if you happen to have it on Xbox or care to drop by, let me know.
**Update:** Played some co-op with Jen, had fun navigating puzzles together :)
I'm a huge fan of [XBMC](http://www.xbmc.org/). My pc (currently running Ubuntu 10.04) has taken root in my
living room, piping all my movies and tv shows straight to my HDTV.
While my pc is set up as a DVR using [MythTV](http://www.mythtv.org) to record shows off my FIOS box, it tends to be a little unreliable, which can suck when it's time to catch up on Daily Show and Colbert episodes.
I've had [Transmission](http://www.transmissionbt.com/) set up for a while for all my torrenting needs, and
I've even written an [XBMC script to manage torrents](https://github.com/correl/Transmission-XBMC), so I got to looking for
tools to track tv show torrent rss feeds.
<!--more-->
My first stop was [TED](http://ted.nu/). TED worked well enough, but would occasionally hang.
Since it's a GUI java app running in the taskbar, it would require me to dig
out my mouse and break out of full screen XBMC to fiddle with it. I eventually
got tired of dealing with TED and went back to prodding Myth.
Recently I've been itching to reliably watch my shows again, so I checked around
for a simple command-line utility to track rss feeds and download torrents.
Finding none, I loaded up vim and threw together a python script to handle it
all for me.
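The core of a script like that is just parsing each feed and picking out the links it hasn't seen before. A minimal sketch of that piece (the function name and feed handling here are illustrative, not the actual script):

```python
import xml.etree.ElementTree as ET

def new_torrents(feed_xml, seen):
    """Yield (title, link) pairs for RSS items whose links aren't in `seen`."""
    for item in ET.fromstring(feed_xml).iter('item'):
        title = item.findtext('title')
        link = item.findtext('link')
        if link and link not in seen:
            yield title, link
```

From there it's just a matter of handing each new link to Transmission and recording it as seen.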
I also have another simple script from when I was using TED (or just
manually downloading shows) which looks at completed torrents, compares
their names with the folders in my TV directory, and moves the shows
into them for XBMC to see.
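That script boils down to a name comparison and a move. A rough sketch of the idea (the paths and matching logic are illustrative, not the script's actual code):

```python
import os
import shutil

def sort_downloads(downloads_dir, tv_dir):
    """Move completed downloads into the TV show folder their names match."""
    shows = {name.lower(): name for name in os.listdir(tv_dir)
             if os.path.isdir(os.path.join(tv_dir, name))}
    for item in os.listdir(downloads_dir):
        normalized = item.lower().replace('.', ' ')
        for key, show in shows.items():
            # e.g. "The.Daily.Show.2011.05.02.avi" matches folder "The Daily Show"
            if normalized.startswith(key):
                shutil.move(os.path.join(downloads_dir, item),
                            os.path.join(tv_dir, show))
                break
```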
A couple cron jobs and a few rss feeds later, and I've got all my shows
automatically delivered straight to XBMC for my lazy evening viewing pleasure.
Because Org Mode makes building and modifying an outline structure
like this so quick and easy, I usually build and modify the project
org document while planning it out with my team. Once done, I then
manually load that information into our issue tracker and get
underway. Occasionally I'll also update tags and progress status in
the org document as the project progresses, so I can use the
same document to plan subsequent development iterations.
** Organizing Notes and Code Exercises
More recently, I've been looking into various ways to get more
things organized with Org mode. I've been stepping through
[[http://sarabander.github.io/sicp/][Structure and Interpretation of Computer Programs]] with some other
folks from work, and discovered that Org mode was an ideal fit for
keeping my notes and exercise work together. The latter is neatly
managed by [[http://orgmode.org/worg/org-contrib/babel/intro.html][Babel]], which lets me embed and edit source examples and
my exercise solutions right in the org document itself, and even
export them to one or more scheme files to load into my
interpreter.
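For instance, a source block along these lines (the =:tangle= target is just an example path) can be evaluated in place and written out to a scheme file with =org-babel-tangle=:

#+BEGIN_EXAMPLE
#+begin_src scheme :tangle exercise-1.3.scm
  (define (square x) (* x x))
#+end_src
#+END_EXAMPLE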
** Exporting and Publishing Documents
Publishing my notes with org is also a breeze. I've published
project plans and proposals to PDF to share with colleagues, and
exported my [[https://github.com/correl/sicp][SICP notes]] to html and [[http://sicp.phoenixinquis.net/][dropped them into a site]] built
with [[http://jekyllrb.com/][Jekyll]]. Embedding graphs and diagrams into exported documents
using [[http://www.graphviz.org/][Graphviz]], [[http://www.mcternan.me.uk/mscgen/][Mscgen]], and [[http://plantuml.sourceforge.net/][PlantUML]] has also really helped with
putting together some great project plans and documentation. A lot of
great examples using those tools (and more!) can be found [[http://home.fnal.gov/~neilsen/notebook/orgExamples/org-examples.html][here]].
** Emacs Configuration
While learning all the cool things I could do with Org mode and Babel,
it was only natural I'd end up using it to reorganize my [[https://github.com/correl/dotfiles/tree/master/.emacs.d][Emacs
configuration]]. Up until that point, I'd been managing my configuration
in a single init.el file, plus a directory full of mode or
purpose-specific elisp files that I'd loop through and load. Inspired
primarily by the blog post, [[http://zeekat.nl/articles/making-emacs-work-for-me.html]["Making Emacs Work For Me"]], and later by
others such as [[http://pages.sachachua.com/.emacs.d/Sacha.html][Sacha Chua's Emacs configuration]], I got all my configs
neatly organized into a single org file that gets loaded on
startup. I've found it makes it far easier to keep track of what I've
got configured, and gives me a reason to document and organize things
neatly now that it's living a double life as a [[https://github.com/correl/dotfiles/blob/master/.emacs.d/emacs.org][published document]] on
GitHub. I've still got a directory lying around with autoloaded
scripts, but now it's simply reserved for [[https://github.com/correl/dotfiles/blob/master/.emacs.d/emacs.org#auto-loading-elisp-files][tinkering and sensitive
configuration]].
** Tracking Habits
Another great feature of Org mode that I've been taking advantage
of a lot more lately is the [[http://orgmode.org/manual/Agenda-Views.html][Agenda]]. By defining some org files as
being agenda files, Org mode can examine these files for TODO
entries, scheduled tasks, deadlines and more to build out useful
agenda views to get a quick handle on what needs to be done and
when. While at first I started by simply syncing down my google
calendars as org-files (using [[http://orgmode.org/worg/code/awk/ical2org.awk][ical2org.awk]]), I've started
managing TODO lists in a dedicated org file. By adding tasks to
this file, scheduling them, and setting deadlines, I've been doing
a much better job of keeping track of things I need to get done
and (even more importantly) /when/ I need to get them done.
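An entry in that file is as simple as a headline with a date or two attached (this one's made up):

#+BEGIN_EXAMPLE
* TODO Write up the disaster recovery plan
  SCHEDULED: <2015-02-02 Mon>
  DEADLINE: <2015-02-06 Fri>
#+END_EXAMPLE

With that in an agenda file, the task shows up in the agenda view on its scheduled day, and gets harder to ignore as the deadline approaches.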
Back in May, a coworker and I got the idea to start up a little
seminar after work every couple of weeks with the plan to set aside
some time to learn and discuss new ideas together, along with anyone
else who cared to join us.
** Learning Together
Over the past several months, we've read our way through the first
three chapters of the book, watched the [[http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/][related video lectures]], and
done (most of) the exercises.
Aside from being a great excuse to unwind with friends after work
(which it is!), it's proved to be a great way to get through the
material. Doing a section of a chapter every couple of weeks is an
easy goal to meet, and meeting up to discuss it becomes something to
look forward to. We all get to enjoy a sense of accomplishment in
learning stuff that can be daunting or difficult to set aside time for
alone.
The best part, by far, is getting different perspectives on the
material. Most of my learning tends to be solitary, so it's refreshing
to do it with a group. By reviewing the different concepts together,
we're able to gain insights and clarity we'd never manage on our
own. Even the simplest topics can spur interesting conversations.
** SICP
Our first adventure together so far has been the venerable [[http://mitpress.mit.edu/sicp/][Structure
and Interpretation of Computer Programs]]. This book had been on my todo
list for a long time, but never quite bubbled to the top. I'm glad to
have the opportunity to go through it in this format, since there's
plenty of time to really get into the exercises and let the
lessons sink in.
SICP was originally an introductory textbook for MIT computer
programming courses. What sets it apart from most, though, is that it
doesn't focus so much on learning a particular programming language
(while the book does use and cover MIT Scheme) as it does on
identifying and abstracting out patterns common to most programming
problems. Because of that, the book remains every bit as useful and
illuminating today, especially now that functional paradigms are
re-entering the spotlight and means of abstracting and composing
systems are as important as ever.
** What's next?
We've still got plenty of SICP left to get through. We've only just
gotten through Chapter 4, section 1, which has us building a scheme
interpreter *in* scheme, so there's plenty of fun left to be had
there.
We're also starting to do some smaller, lunchtime review meetings
following the evening discussions to catch up the folks that can't
make it. I may also try sneaking in some smaller material, like
# Gather highlights from the book and write a post summarizing my
# thoughts on it, and what I took away from it.
A few days before leaving work for a week and a half of flying and
cruising to escape frigid Pennsylvania, I came across a [[armstrong-oop][Joe Armstrong
quote]] during my regularly scheduled slacking off on twitter and Hacker
News. I'd come across it a couple of times before, only this time I noticed
it had a source link. This led me to discovering (and shortly
thereafter, buying) Peter Seibel's "[[http://www.codersatwork.com/][Coders at Work -- Reflections on
the Craft of Programming]]". I loaded it onto my Nook, and off I went.
The book is essentially a collection of interviews with a series of
highly accomplished software developers. Each of them has their own
fascinating insights into the craft and its rich history.
While making my way through the book, I highlighted some excerpts
that, for one reason or another, resonated with me. I've organized and
elaborated on them below.
** DONE Incremental Changes
CLOSED: [2015-01-20 Tue 20:59]
<<fitzpatrick-increments>>
#+BEGIN_QUOTE
I've seen young programmers say, "Oh, shit, it doesn't work," and then
rewrite it all. Stop. Try to figure out what's going on. *Learn how to
write things incrementally so that at each stage you could verify it.*\\
-- Brad Fitzpatrick
#+END_QUOTE
I can remember doing this to myself when I was still relatively new to
coding (and even worse, before I discovered source control!). Some
subroutine or other would be misbehaving, and rather than picking it
apart and figuring out what it was I'd done wrong, I'd just blow it
away and attempt to write it fresh. While I /might/ be successful,
that likely depended on the issue being some sort of typo or missed
logic; if it was broken because I misunderstood something or had a bad
plan to begin with, rewriting it would only result in more broken
code, sometimes in more or different ways than before. I don't think
I've ever rewritten someone else's code without first at least getting
a firm understanding of it and what it was trying to accomplish, but
even then, breaking down changes piece by piece makes it all the
easier to maintain sanity.
I do still sometimes catch myself doing too much at once when building
a new feature or fixing a bug. I may have to fix a separate bug that's
in my way, or I may have to make several different changes in various
parts of the code. If I'm not careful, things can get out of hand
pretty quickly, and before I know it I have a blob of changes strewn
across the codebase in my working directory without a clear picture of
what's what. If something goes wrong, it can be pretty tough to sort
out which change broke things (or fixed them). Committing changes
often helps tremendously to avoid this sort of situation, and when I
catch myself going off the rails I try to find a stopping point and
split changes up into commits as soon as possible to regain
control. Related changes and fixes can always be squashed together
afterwards to keep things tidy.
** DONE Specifications & Documentation
CLOSED: [2015-01-20 Tue 20:59]
<<bloch-customers>>
#+BEGIN_QUOTE
*Many customers won't tell you a problem; they'll tell you a
solution.* A customer might say, for instance, "I need you to add
support for the following 17 attributes to this system. Then you have
to ask, 'Why? What are you going to do with the system? How do you
expect it to evolve?'" And so on. You go back and forth until you
figure out what all the customer really needs the software to
do. These are the use cases.\\
-- Joshua Bloch
#+END_QUOTE
Whether your customer is an actual customer or your CEO, the point stands:
customers are /really bad/ at expressing what they want. It's hard to
blame them, though; analyzing what you really want and distilling it
into a clear specification is tough work. If your customer is your
boss, it can be intimidating to push back with questions like "Why?",
but if you can get those questions answered you'll end up with a
better product, a better /understanding/ of the product, and a happy
customer. The agile process of doing quick iterations to get tangible
results in front of them is a great way of getting the feedback and
answers you need.
<<armstrong-documentation>>
#+BEGIN_QUOTE
The code shows me what it /does/. It doesn't show me what it's
supposed to do. I think the code is the answer to a problem.
*If you don't have the spec or you don't have any documentation, you have to guess what the problem is from the answer. You might guess wrong.*\\
-- Joe Armstrong
#+END_QUOTE
Once you've got the definition of what you've got to build and how
it's got to work, it's extremely important that you get it
documented. Too often, I'm faced with code that's doing something in
some way that somebody, either a customer or a developer reading it,
takes issue with, and there's no documentation anywhere on why it's
doing what it's doing. What happens next is anybody's guess. Code
that's clear and conveys its intent is a good start towards avoiding
this sort of situation. Comments explaining intent help too, though
making sure they're kept up to date with the code can be
challenging. At the very least, I try to promote useful commit
messages explaining what the purpose of a change is, and reference a
ticket in our issue tracker which (hopefully) has a clear accounting
of the feature or bugfix that prompted it.
** DONE Pair Programming
CLOSED: [2015-01-20 Tue 21:03]
<<armstrong-pairing>>
#+BEGIN_QUOTE
... *if you don't know what you're doing then I think it can be very
helpful with someone who also doesn't know what they're doing.* If you
have one programmer who's better than the other one, then there's
probably benefit for the weaker programmer or the less-experienced
programmer to observe the other one. They're going to learn something
from that. But if the gap's too great then they won't learn, they'll
just sit there feeling stupid.\\
-- Joe Armstrong
#+END_QUOTE
Pairing isn't something I do much. At least, it's pretty rare that I
have someone sitting next to me as I code. I *do* involve peers while
I'm figuring out what I want to build as often as I can. The tougher
the problem, the more important it is, I think, to get as much
feedback and brainstorming in as possible. This way, everybody gets to
tackle the problem and learn together, and anyone's input, however
small it might seem, can be the key to the "a-ha" moment to figuring
out a solution.
** DONE Peer Review
CLOSED: [2015-01-25 Sun 22:44]
<<crockford-reading>>
#+BEGIN_QUOTE
*I think an hour of code reading is worth two weeks of QA.* It's just
a really effective way of removing errors. If you have someone who is
strong reading, then the novices around them are going to learn a lot
that they wouldn't be learning otherwise, and if you have a novice
reading, he's going to get a lot of really good advice.\\
-- Douglas Crockford
#+END_QUOTE
Just as important as designing the software as a team, I think, is
reviewing it as a team. In doing so, each member of the team has an
opportunity to understand /how/ the system has been implemented, and
to offer their suggestions and constructive criticisms. This helps the
team grow together, and results in a higher quality of code overall.
This benefits QA as well as the developers themselves for the next
time they find themselves in that particular bit of the system.
** DONE Object-Oriented Programming
CLOSED: [2015-01-20 Tue 20:59]
<<armstrong-oop>>
#+BEGIN_QUOTE
I think the lack of reusability comes in object-oriented languages,
not in functional languages.
*Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.*\\
-- Joe Armstrong
#+END_QUOTE
A lot has been written on why OOP isn't the great thing it claims to
be, or was ever intended to be. Having grappled with it myself for
years, attempting to find ways to keep my code clean, concise and
extensible, I've more or less come to the same conclusion as Armstrong
in that coupling data structures with behaviour makes for a terrible
mess. Dividing the two led to a sort of moment of clarity; there was
no more confusion about what methods belong on what object. There was
simply the data, and the methods that act on it. I am still struggling
a bit, though, on how to bring this mindset to the PHP I maintain at
work. The language seems particularly ill-suited to managing complex
data structures (or even simple ones -- vectors and hashes are
bizarrely intertwined).
** DONE Writing
CLOSED: [2015-01-28 Wed 22:42]
<<bloch-writing>>
#+BEGIN_QUOTE
You should read /[Elements of Style]/ for two reasons: The first is
that a large part of every software engineer's job is writing
prose. *If you can't write precise, coherent, readable specs, nobody
is going to be able to use your stuff.* So anything that improves your
prose style is good. The second reason is that most of the ideas in
that book are also applicable to programs.\\
-- Joshua Bloch
#+END_QUOTE
<<crockford-writing>>
#+BEGIN_QUOTE
*My advice to everybody is pretty much the same, to read and write.*\\
...\\
Are you a good Java programmer, a good C programmer, or whatever? I
don't care. I just want to know that you know how to put an algorithm
together, you understand data structures, and you know how to document
it.\\
-- Douglas Crockford
#+END_QUOTE
<<knuth-writing>>
#+BEGIN_QUOTE
This is what literate programming is so great for --\\
*I can talk to myself. I can read my program a year later and know
exactly what I was thinking.*\\
-- Donald Knuth
#+END_QUOTE
The longer I've programmed professionally, the clearer it is that writing
(and communication in general) is a very important skill to
develop. Whether it be writing documentation, putting together a
project plan, or whiteboarding and discussing something, clear and
concise communication skills are a must. Clarity in writing translates
into clarity in coding as well, in my opinion. Code that is short, to
the point, clear in its intent, and makes good use of structure and
wording (in the form of function and variable names) is far easier to
read and reason about than code that is disorganized and obtuse.
** DONE Knuth
CLOSED: [2015-01-28 Wed 22:42]
<<crockford-knuth>>
#+BEGIN_QUOTE
I tried to make familiarity with Knuth a hiring criteria, and I was
disappointed that I couldn't find enough people that had read him. In
my view,
*anybody who calls himself a professional programmer should have read
Knuth's books or at least should have copies of his books.*\\
-- Douglas Crockford
#+END_QUOTE
<<steele-knuth>>
#+BEGIN_QUOTE
... Knuth is really good at telling a story about code. When you read
your way through /The Art of Computer Programming/ and you read your
way through an algorithm, he's explained it to you and showed you some
applications and given you some exercises to work, and *you feel like
you've been led on a worthwhile journey.*\\
-- Guy Steele
#+END_QUOTE
<<norvig-knuth>>
#+BEGIN_QUOTE
At one point I had /[The Art of Computer Programming]/ as my monitor
stand because it was one of the biggest set of books I had, and it was
just the right height. That was nice because it was always there, and
I guess then I was more prone to use it as a reference because it was
right in front of me.\\
-- Peter Norvig
#+END_QUOTE
I haven't read any of Knuth's books yet, which is something I'll have
to rectify soon. I don't think I have the mathematical background
necessary to get through some of his stuff, but I expect it will be
rewarding nonetheless. I'm also intrigued by his concept of literate
programming, and I'm curious to learn more about TeX. I imagine I'll
be skimming through [[http://brokestream.com/tex-web.html][TeX: The Program]] pretty soon now that I've
I have a few computers I use on a daily basis, and I like to keep the
same emacs and shell configuration on all of them, along with my org
files and a handful of scripts. Since I'm sure other people have this
problem as well, I'll share what I'm doing so anyone can learn from
(or criticise) my solutions.
** Git for configuration and projects
I'm a software developer, so keeping things in git just makes sense
to me. I keep my org files in a privately hosted git repository, and
[[https://www.gnu.org/software/emacs/][Emacs]] and [[http://www.zsh.org/][Zsh]] configurations in a [[https://github.com/correl/dotfiles][public repo on github]]. My blog is
also hosted and published on GitHub; I like having it cloned
to all my machines so I can work on drafts wherever I may be.
My [[https://github.com/correl/dotfiles/blob/master/.zshrc][.zshrc]] installs [[https://github.com/robbyrussell/oh-my-zsh][oh-my-zsh]] if it isn't installed already, and sets
up my shell theme, path, and some other environmental things.
My [[https://github.com/correl/dotfiles/blob/master/.emacs.d/emacs.org][Emacs configuration]] behaves similarly, making use of John
Wiegley's excellent [[https://github.com/jwiegley/use-package][use-package]] tool to ensure all my packages are
installed if they're not already there and configured the way I like
them.
All I have to do to get running on a new system is to install git,
emacs and zsh, clone my repo, symlink the files, and grab a cup of
tea while everything installs.
** Bittorrent sync for personal settings & books
For personal configuration that doesn't belong in and/or is too
sensitive to be in a public repo, I have a folder of dotfiles and
things that I sync between my machines using [[https://www.getsync.com/][Bittorrent Sync]]. The
dotfiles are arranged into directories by their purpose:
#+BEGIN_EXAMPLE
[correlr@reason:~/dotenv]
% tree -a -L 2
.
├── authinfo
│   └── .authinfo.gpg
├── bin
│   └── .bin
├── emacs
│   ├── .bbdb
│   └── .emacs.local.d
├── mail
│   ├── .gnus.el
│   └── .signature
├── README.org
├── .sync
│   ├── Archive
│   ├── ID
│   ├── IgnoreList
│   └── StreamsList
├── tex
│   └── texmf
├── xmonad
│   └── .xmonad
└── zsh
    └── .zshenv
#+END_EXAMPLE
This folder structure allows my configs to be easily installed using
[[https://www.gnu.org/software/stow/][GNU Stow]] from my =dotenv= folder:
: stow -vvS *
Running that command will, for each file in each of the directories,
create a symlink to it in my home folder if there isn't a file or
directory with that name there already.
Bittorrent sync also comes in handy for syncing my growing [[http://calibre-ebook.com/][Calibre]] ebook
collection, which outgrew my [[https://www.dropbox.com/][Dropbox]] account a while back.
The first thing I needed to do was describe my data structure. Leaning
on my experiences reading and working through [[https://mitpress.mit.edu/sicp/][SICP]], I got to work
building a constructor function and several accessors.
I decided to represent each node on a graph with an id, a list of
parent ids, and a group which will correspond to the branch on the
graph the commit belongs to.
#+begin_src emacs-lisp
(defun git-graph/make-node (id &optional parents group)
  (list id parents group))

(defun git-graph/node-id (node)
  (nth 0 node))

(defun git-graph/node-parents (node)
  (nth 1 node))

(defun git-graph/node-group (node)
  (nth 2 node))
#+end_src
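As a quick sanity check, constructing a node and reading it back with the accessors looks like this (a hypothetical commit =d= with two parents, assigned to branch group 1):

#+begin_src emacs-lisp
;; A merge commit "d" with parents "b" and "c", in group 1.
(let ((node (git-graph/make-node "d" '("b" "c") 1)))
  (list (git-graph/node-id node)       ;; "d"
        (git-graph/node-parents node)  ;; ("b" "c")
        (git-graph/node-group node)))  ;; 1
#+end_src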
*** Converting the structure to Graphviz
Now that I had my data structures sorted out, it was time to step
through them and generate the graphviz source that'd give me the
nice-looking graphs I was after.
The graph is constructed using the example above as a template. The
nodes are defined first, followed by the edges between them.
fetch data from a paginated JSON REST API. It worked, but it wasn't
too clean. In particular, the handling of the multiple pages and
concatenation of results was left up to the calling code. Ideally,
both of these concerns should be handled by the library, letting the
application focus on working with a full result set. Using Elm's
Tasks, we can achieve exactly that!
** What's a Task?
A [[http://package.elm-lang.org/packages/elm-lang/core/5.1.1/Task][Task]] is a data structure in Elm which represents an asynchronous
operation that may fail, which can be mapped and *chained*. What this
means is, we can create an action, transform it, and chain it with
additional actions, building up a complex series of things to do into
a single =Task=, which we can then package up into a [[http://package.elm-lang.org/packages/elm-lang/core/5.1.1/Platform-Cmd#Cmd][Cmd]] and hand to
the Elm runtime to perform. You can think of it like building up a
[[https://en.wikipedia.org/wiki/Futures_and_promises][Future or Promise]], setting up a sort of [[https://en.wikipedia.org/wiki/Callback_(computer_programming)][callback]] chain of mutations
and follow-up actions to be taken. The Elm runtime will work its way
through the chain and hand your application back the result in the
form of a =Msg=.
So, tasks sound great!
** Moving to Tasks
Just to get things rolling, let's quit using =Http.send=, and instead
prepare a simple =toTask= function leveraging the very handy
=Http.toTask=. This'll give us a place to start building up some more
complex behavior.
#+BEGIN_SRC elm
send :
    (Result Http.Error (Response a) -> msg)
    -> Request a
    -> Cmd msg
send resultToMessage request =
    toTask request
        |> Task.attempt resultToMessage


toTask : Request a -> Task Http.Error (Response a)
toTask =
    httpRequest >> Http.toTask
#+END_SRC
** Shifting the recursion
Now, for the fun bit. We want, when a request completes, to inspect
the result. If the task failed, we do nothing. If it succeeded, we
move on to checking the response. If we have a =Complete= response,
we're done. If we do not, we want to build another task for the next
request, and start a new iteration on that.
All that needs to be done here is to chain our response handling using
=Task.andThen=, and either recurse to continue the chain with the next
=Task=, or wrap up the final results with =Task.succeed=!
#+BEGIN_SRC elm
recurse :
    Task Http.Error (Response a)
    -> Task Http.Error (Response a)
recurse =
    Task.andThen
        (\response ->
            case response of
                Partial request _ ->
                    httpRequest request
                        |> Http.toTask
                        |> recurse

                Complete _ ->
                    Task.succeed response
        )
#+END_SRC
That wasn't so bad. The function recursion almost seems like cheating:
I'm able to build up a whole chain of requests /based/ on the results
without actually /having/ the results yet! The =Task= lets us define a
complete plan for what to do with the results, using what we know
about the data structures flowing through to make decisions and tack
on additional things to do.
** Accumulating results
There's just one thing left to do: we're not accumulating results yet.
We're just handing off the results of the final request, which isn't
too helpful to the caller. We're also still returning our Response
structure, which is no longer necessary, since we're not bothering
with returning incomplete requests anymore.
Cleaning up the types is pretty easy. It's just a matter of switching
out some instances of =Response a= with =List a= in our type
declarations...
#+BEGIN_SRC elm
send :
    (Result Http.Error (List a) -> msg)
    -> Request a
    -> Cmd msg

toTask : Request a -> Task Http.Error (List a)

recurse :
    Task Http.Error (Response a)
    -> Task Http.Error (List a)
#+END_SRC
...then changing our =Complete= case to return the actual items:
#+BEGIN_SRC elm
Complete xs ->
    Task.succeed xs
#+END_SRC
The final step, then, is to accumulate the results. Turns out this is
*super* easy. We already have an =update= function that combines two
responses, so we can map /that/ over our next request task so that it
incorporates the previous request's results!
#+BEGIN_SRC elm
Partial request _ ->
    httpRequest request
        |> Http.toTask
        |> Task.map (update response)
        |> recurse
#+END_SRC
** Tidying up
Things are tied up pretty neatly now! Calling code no longer needs to
care whether the JSON endpoints it's calling paginate their results;
it'll receive everything it asked for as though it were a single
request. Implementation details like the =Response= structure,
=update= method, and =httpRequest= no longer need to be exposed.
=toTask= can be exposed now as a convenience to anyone who wants to
perform further chaining on their calls.
Now that there's a cleaner interface to the module, the example app is
So, there we have it! Feel free to check out my complete
=Paginated= library on the [[http://package.elm-lang.org/packages/correl/elm-paginated/latest][Elm package index]], or on [[https://github.com/correl/elm-paginated][GitHub]]. Hopefully
you'll find it or this post useful. I'm still finding my way around
This is a recurring schedule item that runs every weekday at 6:30. We
can tell this by looking at the =localtime= field. From the
documentation on [[https://www.developers.meethue.com/documentation/datatypes-and-time-patterns#16_time_patterns][time patterns]], we can see that it's a recurring time
pattern specifying days of the week as a bitmask, and a time (6:30).
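Concretely, assuming the documented bitmask values (Monday = 64 down through
Sunday = 1, so the weekdays sum to 124), the =localtime= value for a schedule
like this one would look like:

#+BEGIN_EXAMPLE
W124/T06:30:00
#+END_EXAMPLE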
It's hard to feel positive about this year's [[https://www.glaad.org/tdov][Transgender Day of Visibility]]. On
the one hand, trans visibility is extremely important. It's because of out trans
people that I was able to understand my own identity. The more cis people really
see, talk to, and come to understand trans people, the easier it will be for
them to understand that we're, well, just /people/. Transitioning is a
/beautiful/ thing. Look at any set of photos trans people share, and you'll see
that they're not just happier, but more vibrant, more full of life, and so very
genuinely themselves! This is what folks need to see more of, and what I think
this day is meant to be about. Unfortunately, a lot of what folks are seeing
nowadays isn't trans people thriving, it's misinformation and vitriol. This
isn't at all a new phenomenon, but [[https://www.vox.com/first-person/22977970/anti-trans-legislation-texas-idaho][in recent years it's gotten overwhelming]].
This year, like last year, has brought with it a [[https://freedomforallamericans.org/legislative-tracker/anti-transgender-legislation/][record-breaking amount of
anti-trans legislation across the majority of states in the country]]. These bills
are targeting trans youths by banning them from playing sports with their peers,
forbidding any discussion about gender or queer identities in their classrooms,
requiring that trusted teachers and other school staff out them to families, and
restricting and even outlawing their healthcare. Book bans have been sweeping
the nation, intent on removing anything they consider unpleasant or
uncomfortable, which has mostly amounted to anything discussing gender,
sexuality, or race. There is a /constant/ stream of vitriol flowing across
social media and news outlets sowing outrage by [[https://www.theguardian.com/commentisfree/2018/apr/19/anti-trans-rhetoric-homophobia-trans-rights][recycling old homophobic
rhetoric]] as they label trans people as predators, anyone supporting us as
"groomers", and claim we're forcing children into life-altering surgeries. Trans
kids [[https://www.washingtonpost.com/dc-md-va/2021/04/22/transgender-child-sports-treatments/][do not get surgeries]], but laws are being pushed and passed banning them
anyway, though always with a note that [[https://www.them.us/story/trans-health-care-attacks-target-intersex-people-too][those restrictions aren't extended to
intersex kids]], who continue to be operated upon to make their bodies conform to
binary expectations.
Trans kids and trans adults alike, whether they're in states that are actively
arguing or passing these bills, [[https://www.thedailybeast.com/we-trans-people-will-never-surrender-but-fighting-bigots-is-exhausting?s=09&source=twitter&utm_source=pocket_mylist&via=desktop][are having to endure watching this all happen]].
Watching their identities, their /existence/ be debated, questioned, demonized,
and ridiculed. We're having to watch this all unfold, and it really feels like
[[https://truthout.org/audio/trans-youth-are-facing-right-wing-attacks-and-a-solidarity-shortage/][few people are actively defending us or standing up to this torrent of hate]].
Most of these bills aren't even getting much news coverage, and [[https://www.teenvogue.com/story/trans-people-right-wing-media?s=09&utm_source=pocket_mylist][those that are
often aren't in our favor]], framing the issues as [[https://www.bbc.com/news/uk-43255878][divisive]] or [[https://www.nbcnews.com/nbc-out/out-news/trans-swimmer-lia-thomas-speaks-scrutiny-controversy-rcna18503][controversial]]. Even
Florida's so-called [[https://www.flsenate.gov/Session/Bill/2022/1557]["Don't Say Gay" bill]] is framed first and foremost as an
attack on gay rights (which it certainly is), but leaving the very deliberate
attack on trans kids largely out of the conversation. The bill's backers
certainly didn't hide it, [[https://www.axios.com/dont-say-gay-bill-desantis-578593fc-5d6e-4098-b69a-c838b017ce24.html][claiming its intent is to squash so-called "woke
gender ideology"]].
Ours is a community molded by trauma and loss. Our history, vibrant as it is,
has been largely [[https://historycollection.com/16-remarkable-historical-figures-who-were-transgender/][hidden from us]] or [[https://www.teenvogue.com/story/lgbtq-institute-in-germany-was-burned-down-by-nazis][outright destroyed]]. Nearly an entire
generation of queer people was [[https://read.dukeupress.edu/tsq/article/7/4/527/168493/Trans-in-a-Time-of-HIV-AIDS?utm_source=blog&utm_medium=post&utm_campaign=j-TSQ_7-4_Feb2021][lost to hate and apathy during the AIDS epidemic]].
Many [[https://www.hrc.org/resources/fatal-violence-against-the-transgender-and-gender-non-conforming-community-in-2022][continue to be lost every year to violence]]. Mostly trans women of color,
losing their lives to hate in the rising tide of racism, misogyny, homophobia
and transphobia. We likely lose far more than we know as crimes go unreported or
misreported, as they tend to be, when trans folks [[https://chicago.suntimes.com/2021/11/29/22807775/what-i-learned-about-news-media-law-enforcement-transgender-murders-morgan-sherm-op-ed][get misgendered in death]]. This
isn't how it's supposed to be. Discovering and living as who we truly are is one
of the most joyful things in life. Being ourselves, /really sharing ourselves/
with the people we love is such a wonderful, vibrant feeling. That more and more
people are able to learn about the beautiful spectrums of identities is an
amazing thing. We've got greater resources and representation now than ever
before.
I do not believe that all of this hatred, all of these laws, any of it will win.
On this Transgender Day of Visibility, I feel it's important that we're not
merely seen, but seen fully. I hope that people will see our joy and our
strength and our fierce love of authentic life. I also hope that people will see
our pain, and find it in themselves to offer not just performative displays of
support but real empathy and action. We're out here showing you who we are and
what we can be. Please show /us/ who /you/ are and what we mean to you.
And, for the love of everything, [[https://www.gamespot.com/articles/jk-rowlings-anti-transgender-stance-and-hogwarts-legacy/1100-6501632/?s=09&utm_source=pocket_mylist][please leave Harry Potter in the past]].
- ML5 ([[http://www.cs.cmu.edu/~tom7/papers/modal-types-for-mobile-code.pdf][Modal Types for Mobile Code]])
- Function-Passing ([[https://infoscience.epfl.ch/record/230304/files/fp-jfp.pdf][A Programming Model and Foundation for
Lineage-Based Distributed Computation]])
- Birds-eye view of fringe projects
- [[http://christophermeiklejohn.com/publications/hotedge-2018-containers-preprint.pdf][Verifying Interfaces Between Container-Based Components (Or... A
Type System By Any Other Name)]]
- Rejected 😅
- Statically ensuring that microservices satisfy invariants -
Adelbert Chang
- Statically ensuring that functions on replicated data are
- The player is always a point; its hitbox is a rectangle positioned
relative to that point
- The NES does not have a random number generator (3 options, in
increasing order of stupidity)
- Tetris: Do it with math (16-bit Fibonacci linear feedback shift register)
- FF: (Nasir Gebelli, contractor) a lookup table of 256 random numbers
- Contra: a single global 8-bit value that increments by 7 whenever
the game is idle
- The demo uses prerecorded /actions/, so it can play out differently
- Saving progress
- Password systems
- DQ2 in Japan used a "poem"
- FDS
- Shut down due to ease of piracy
- Battery-backed memory
- "Hold reset" - power issues could lead to corruption
- Write multiple times with CRC
- "Embrace the stupid"
- Is it close enough, and much more efficient?
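A few of the tricks above are fun to sketch out concretely. Below is a rough Python rendition of the Tetris-style LFSR, the Contra-style counter, and the CRC-guarded save records; the LFSR taps, checksum choice, and record layout are illustrative stand-ins, not the games' actual implementations:

#+begin_src python
import zlib

def lfsr16(state):
    """One step of a 16-bit Fibonacci LFSR. Taps (16, 14, 13, 11) are
    a textbook maximal-length choice, not necessarily Tetris's exact taps."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def contra_step(state):
    """Contra-style 'RNG': an 8-bit counter bumped by 7 while idle."""
    return (state + 7) & 0xFF

COPIES = 3  # how many redundant save records to write

def write_save(data):
    """Write the save data multiple times, each copy carrying a CRC-32."""
    record = (len(data).to_bytes(2, "big")
              + data
              + zlib.crc32(data).to_bytes(4, "big"))
    return record * COPIES

def read_save(blob):
    """Return the first copy whose checksum still matches."""
    record_len = len(blob) // COPIES
    for i in range(COPIES):
        record = blob[i * record_len:(i + 1) * record_len]
        size = int.from_bytes(record[:2], "big")
        data = record[2:2 + size]
        crc = int.from_bytes(record[2 + size:2 + size + 4], "big")
        if zlib.crc32(data) == crc:
            return data
    raise ValueError("all save copies corrupted")
#+end_src

The LFSR gives a long, deterministic cycle for a few cycles of math; the counter is "close enough" randomness for nearly free; and the redundant CRC records survive a corrupted write as long as one copy checks out.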
** Day Two
*** Duolingo: Microservice Journey :ATTACH:
- Speaker :: Max Blaze
- first microservice in 2016
- making many changes to the product, many releases per day
- centralized dashboards/logging
- Terraform for infrastructure as code
- First microservice in ECS in 2017-2018
- Why move to microservices?
- Scalability problem with teams
- Slow and difficult with a monolith
- Desire to use multiple languages (monolith in python, wanting to
incorporate scala, nodejs, ...)
- Flexibility
- Velocity
- Reliability
- Cost savings
- What to carve out first?
- Not the largest chunk
- Start with a small but impactful feature
- move up in size, complexity, and risk
- consider dependencies
- First thing was the reminder service 🦉🗡
- Using circuit breakers to make microservices independent
- Why docker?
- Kind of the only game in town
- Why docker with ECS?
- task auto scaling
- task-level IAM
- needs to be supported by the aws client library (e.g., boto)
- cloudwatch metrics
- dynamic alb targets
- manageability
- Microservice abstractions at Duolingo
- Abstracted into terraform modules
- Web service (internal or external)
- load balancer and route 53
- worker service (daemon or cron)
- sqs and event-based scheduling
- data store
- monitoring
- CI/CD
- Github -> Jenkins -> ECR/Terraform (S3) -> ECS
- Load balancing
- ALB vs. CLBs
- ALBs are stricter when handling malformed requests (defaults to
HTTP/2; headers always passed in lowercase)
- Differences in cloudwatch metrics (continuous in CLBs, discrete
in ALBs)
- Standardizing microservices
- develop a common naming scheme for repos and services
- autogenerate as much of the initial service as possible (?)
- move core functionality to shared base libraries
- *provide standard alarms and dashboards*
- /periodically review microservices for consistency and quality/
- Monitoring microservices
- includes load balancer errors
- pagerduty integration
- includes links to playbooks
- emergency pages, warnings go to email
- schedules and rotations are managed by terraform
- Grading microservices
- Cost reduction options
- Cluster
- instance type
- pricing options
- auto scale
- add/remove AZs
- using "Spot" (spotinst) to save money on ephemeral cluster instances
- drains ECS services
- spreads capacity across AZs
- bills on % of savings
- ECS allows oversubscription of memory, *WE DO NOT RECOMMEND THIS*
- AWS Limits
- EC2 has a hard-coded maximum number of packets (1024/s) sent to an
Amazon-provided DNS server
- Nitro is not caching DNS requests where Xen was
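One idea from the talk above, using circuit breakers so a failing dependency doesn't drag a service down with it, can be sketched minimally in Python; the thresholds and time source here are illustrative, not Duolingo's actual configuration:

#+begin_src python
import time

class CircuitBreaker:
    """After a few consecutive failures, stop calling the dependency
    and fail fast until a cooldown period has passed."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
#+end_src

The point is that callers spend milliseconds getting an immediate error instead of stacking up timeouts against a dead dependency, which keeps one sick microservice from taking its neighbors down with it.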
*** Mentoring the way to a diverse and inclusive workplace
- Speaker :: Alexandra Millatmal
- Twitter :: @halfghaninNE
- Email :: hello@alexandramillatmal.com
- Slides :: http://alexandramillatmal.com/talks
- Developer at Newsela (Education tech company promoting literacy in
the 2nd-12th grade space)
- The tenets of mentorship are similar to the tenets of inclusive
companies
- Mentorship doesn't work for folks of under-represented backgrounds
- Finding very similar entry level jobs, very homogenous teams, no time to
support learning
- Skill-building and diversity appear related
- What if strong mentorship /begets/ diversity & inclusion?
- Good mentorship / good diversity
- Supporting and retaining engineers with underrepresented identities
- be welcoming and inclusive in the recruiting process
- post openings on "key values"
- company must list their values
- conveys people and tech values on the same level
- candidates filter jobs based on their prioritized values
- referrals can cut both ways
- can increase the homogenous nature of the workplace
- maybe direct referral bonuses into donations to inclusive
groups that attract and retain talent
- affinity groups
- caucuses across departments working with management
- standardized review process
- Stanford research into review processes
- men overrepresented in the higher tier, women in the middle
- standardizing removed the bias
- clear definitions of roles and responsibilities
- do they have ownership
- are these employees getting a seat at the table for decisions
- representation in leadership
- are there people there that look like me?
- is there a clear model of advancement? allyship in leadership?
- investment in internal & external advocacy
- signals that companies understand the systemic barriers to
inclusion and diversity
- sponsorship - "super mentorship"
- stark differences in valuation of the above bulletpoints between
underrepresented groups and well-represented groups (women vs men,
lgbt+ vs straight men)
- Supporting and leveling up junior engineers
- recruiting process / relationships
- the candidate should be receiving feedback on their performance in
the recruiting process!
- Gives them constructive advice and context
- apprenticeships and clearly defined entry-level positions
- is there a clear path for growth?
- clear and structured onboarding
- please do not make their point person a person they report to
- need to get information from someone that doesn't involve
company politics
- information should exist outside of leads'/managers' heads
- define onboarding procedures in a shared space
- learning groups
- space to ask questions and demonstrate leadership, particularly
with peer-to-peer learning
- formalized mentorship
- ensure that compensated time is resulting in measurable goals
for the junior engineer
- recommend them for opportunities
- standardized review process
- reframe junior-ness as an opportunity, not a deficit of skill
- Mentorship with diversity and inclusion in mind
- this work is really hard
- easy to fall into a pattern of saying you're making progress
without measuring to make sure that's the case
- intent is only half of the picture
- the other half is /sacrifice/ to make real, measured investments
- mentorship should begin during the interviews
- [[https://www.wired.com/story/for-young-female-coders-internship-interviews-can-be-toxic/][wired article on young women's interview experiences]] (today?!)
- place serious focus on developing mentors
- forces mentees to manage /up/
- mentorship is a two-way street
- have you ever seen someone become a better collaborator after
mentoring a junior engineer?
- mentorship is leadership and it's learned
- have clear growth objectives for the mentor and the mentee
- mentorship should happen on compensated time
- rethink the peer group
- slack channel for juniors spread across different offices
- wasn't an organic space to share knowledge
- a black junior woman engineer's peers aren't just other black
employees, or women, or other limited groups
- What's the value to the company?
- make a business case for mentorship
- that will drive diversity and inclusion
- mentorship can
- build brand halo among candidates
- distribute management responsibilities
- build its own workforce
- distributes business knowledge working on real business projects
- fosters relationship building and belonging
- practices wielding expertise, fosters bonding over work
microcontrollers. I did some research and settled on the popular [[https://en.wikipedia.org/wiki/ESP8266][ESP8266]] series
of microcontrollers, and found myself a set of WeMos D1 mini clones with
built-in micro USB connectors ([[https://www.amazon.com/gp/product/B081PX9YFV][$3 USD each on Amazon]]). I also snagged myself a
heavy-duty looking [[https://en.wikipedia.org/wiki/Reed_switch][reed switch]] to monitor when the door is closed ([[https://www.amazon.com/Magnetic-Contacts-Shutter-Adjustable-Bracket/dp/B07ZBT28L8][$17 USD on
Amazon]]), and a pack of 3 volt DC single-channel optocoupler relays ([[https://www.amazon.com/Cermant-Channel-Driver-Module-Optocoupler/dp/B0B4MS62X6][$5 USD
each on Amazon]]). I chose single-channel as I have only one door; modules with
more than one channel could make it easier to hook everything up if you have
more. Because this is my first electronics project, I also grabbed
myself an [[https://www.amazon.com/gp/product/B073ZC68QG][electronics kit]] with a breadboard, jumper wires, and a bunch of fun
components to fiddle around with. I tacked on some [[https://www.amazon.com/gp/product/B071H25C43][USB cables]] and [[https://www.amazon.com/gp/product/B0794WT57Y][power bricks]]
to power the controller. After looking at the [[https://www.arduino.cc/en/software/][Arduino IDE]] and [[https://www.nodemcu.com/index_en.html][NodeMcu]] as possible development
options, I settled on using [[https://esphome.io/][ESPHome]] as it is super simple to set up (Arduino
coding looks fun, but I'll get everything I need just using some [[https://en.wikipedia.org/wiki/YAML][YAML]]
configuration) and it integrates super easily with [[https://www.home-assistant.io/][Home Assistant]] (the platform
I use for all of my home automation). I was able to get up and running just by
[[https://esphome.io/guides/installing_esphome.html][installing the ESPHome CLI tool]] and tossing some configuration together.
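As a rough illustration of the shape that configuration takes, here's an ESPHome sketch for this project; the board, pin assignments (D1 for the relay, D2 for the reed switch), and entity names are my choices for this example and may differ from your wiring:

#+begin_src yaml
esphome:
  name: garage-door

esp8266:
  board: d1_mini

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Lets Home Assistant discover and control the device.
api:
ota:

# Reed switch on D2: a closed circuit means the door is shut.
binary_sensor:
  - platform: gpio
    pin:
      number: D2
      mode: INPUT_PULLUP
    name: "Garage Door Closed"
    device_class: garage_door

# Relay on D1: pulse it briefly to trigger the opener, just like
# pressing the wall button.
switch:
  - platform: gpio
    pin: D1
    id: opener_relay
    name: "Garage Door Button"
    on_turn_on:
      - delay: 500ms
      - switch.turn_off: opener_relay
#+end_src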
#+caption: Clockwise from the bottom: The ESP8266 WeMos D1 mini clone wired into the breadboard, the reed switch plate, its accompanying magnet, and the relay switch.
Our home WiFi coverage is ... not great. We're getting by with the old router
from our ISP, and while it mostly works alright, the coverage isn't fantastic
everywhere. The upstairs rooms furthest from the router sometimes don't get much
signal at all. Updating that with new WiFi mesh devices might be awesome, but
I'd also like to have the speed and reliability of a wired connection.
Sadly, our house is not wired up with ethernet. It /is/, however, wired up with
coax to every room from our cable installation. We're no longer using that for
television, so why not use it for our network? Enter [[https://en.wikipedia.org/wiki/Multimedia_over_Coax_Alliance][MoCA]]. MoCA is a standard
for passing network traffic over a network of coaxial cables. With a handful of
MoCA 2.5 adapters, I can give each room in the house that needs it a reliable
connection with speeds of up to 2.5Gbps.
#+caption: MoCA adapters
#+attr_html: :alt A pair of black rectangular adapters. One end of each has a coaxial port, and the other end has an ethernet port. A pair of lights on each device indicate power and coax signal.
[[file:images/moca-adapters.jpg]]
Setup was pretty simple: Connect an adapter between a coax line and one of the
router's available ethernet ports, and another adapter between a coax line and a
PC. Once two or more adapters are on the coax cable network, they light up to
let you know they're talking to each other. The connection to my second floor
home office worked great, and I confirmed that I could get 1Gbps between two of
my devices over the coax connection (matching the best speed their ethernet
ports could muster).
Other rooms, unfortunately, didn't fare as well. I just could not seem to get a
reliable signal in one of the bedrooms, and another wouldn't get anything at all
(it was splitting the signal from the first one). A little bit of research led
me to a pretty important thing to note when setting up such a network: not all
coaxial splitters are the same. It turned out my office was using a pretty new
splitter that was connected directly to the cable coming from the router. All of
the other cables in the house, however, were passing through some pretty old
ones.
#+caption: The old coax splitter, supporting up to 1 GHz
#+attr_html: :alt An aged and weathered coaxial splitter with one input and four outputs, labeled as supporting up to 1000 MHz
[[file:images/coax-splitter-old.jpg]]
Coax splitters are rated for specific frequency ranges. Signals outside of those
frequencies are effectively /filtered out/. To get the full benefit of MoCA 2.5,
any splitters in the network need to support frequencies up to 1675 MHz. Also, any
splitters that live outside, exposed to the weather, may lose signal strength
over time due to oxidation and other factors. It just so happens that the main
splitter for my house is quite old, lives on the outside wall, and is rated for
only up to 1000 MHz. /Whoops/. Replacing that (and a couple of other old ones I
found in the house) cleared everything up, and now all my connections are
working just fine! For the couple of rooms that have a handful of ethernet
devices (my office, and the living room entertainment center), I got a pair of
inexpensive 5-port ethernet switches to get everything linked up to the
adapters.
#+caption: A new coax splitter supporting up to 2.4 GHz
#+attr_html: :alt A brand new coaxial splitter with one input and four outputs, labeled as supporting up to 2.4 GHz
[[file:images/coax-splitter-new.jpg]]
I'll still want to upgrade the WiFi at some point, but at least now our devices
that need strong connections the most have just what they need. I no longer have
to worry about the WiFi signal dropping when I'm working in my office, and the
living room can play high-definition media off my home server without any
trouble at all.
Now if I could just get the cat to stop chewing on the cables...