From the mouths of babes

So a few months back, I mentioned that I was joining something called OWL, a church-based sexuality class. (I love being Unitarian and being able to say things like that.)

Another couple, Shari and Jeff, have been reading a book called “It’s Perfectly Normal” with their son London, age 11. The book is all about sex, sexuality, growing up, etc. I haven’t read it yet, but they told us it’s an awesome book.

Last week, our class topic was Sexual Diversity, and Shari told us about London’s reaction when their reading of the book reached the section on gays and lesbians.

“I don’t understand,” he said, when they got to the part about gays being discriminated against. “I mean, they don’t go out and do it in public, do they?”

“No, honey, they don’t,” Shari assured him.

“Well, then why does it matter what they do at home? Why does anybody else even care?”

Wow. Just… wow. There are plenty of adults who have got a lot to learn from this kid.

Reminds me of a church service a year or two ago, when our minister was away and a member of the congregation led the service and gave the sermon. This was someone who had taught in the R.E. (religious education — basically Sunday school) program, relating the experience of what they called “the lesson from Hell”, a.k.a. the lesson about Hell.

Much of the Unitarian R.E. program involves teaching the kids about other religions, so they’ll have some idea of what’s going on when they get out into the world. This particular lesson involved trying to teach a bunch of Unitarian born-and-bred kids about the Christian idea of Hell — a lesson the teachers always dreaded.

But they took a deep breath, went into the class, and explained all about Heaven and Hell. And they got nothing but blank stares in return.

Afterward, the teachers discussed it. “The kids don’t understand this Hell thing,” they said. “We need to explain it again, because they’ll need to understand it. It’ll be all around them their entire lives.” So they attacked the topic again the next Sunday. And got more blank stares.

Finally, one of the kids raised their hand and said, “I don’t understand. If God loves everyone, why would he send anyone to Hell?”

And the teachers exchanged looks of delight. The kids did understand.

Diff, part 19: De-dup by length?

Today in the diff series, I’ll start in on what I think will be the last major optimization in our LCS algorithm.

Recall that we may have multiple same-length common subsequences in our “best CSes so far” pool. For example, we may have [0,0] [1,2] and [0,0] [2,1]. We’ve been keeping both because the next match might be [2,3], or [3,2], or maybe even [128,2]. We can’t throw away any CS that we might reasonably be able to build upon later (unless we can make an ironclad guarantee that it’s expendable).

Also recall that, when we generate a new CS, we do so by adding a new match onto an existing CS; specifically, the longest one that we can append our new match to. When we look for an existing CS to extend, the only thing we care about is: for the given match, will this CSk give us the longest CSnew that we can possibly generate?

We don’t actually care whether CSk ends in [1,2] or [2,1], as long as we can guarantee that when we append the current match — be it [2,3], [3,2], [3,3], or whatever else — we’ll end up with the longest CSnew we can get. Anywhere and everywhere along the way, we can throw away as many CSes as we want (just like we did yesterday), as long as we never compromise that guarantee. Never throw a CS away unless you can prove it has less growth potential than something else you already have; otherwise, anything goes.

These two properties — length is all-important, yet we keep multiple CSes of the same length — seem to be in a bit of tension with each other. Which suggests an intriguing question: can we get to the point where length alone is a good enough guarantee? Can we get to the point where we hang onto just one length-1 CS, one length-2 CS, one length-3 CS, etc., at a time?

If that were possible, it would mean that we could make a big optimization to our fourth loop — the one where we call Expendable to compare our CSnew to each and every CSk, so we can see if there’s anything (including possibly CSnew) that we can throw away. That fourth loop could, in fact, stop being a loop. It could become a single call to Expendable, comparing CSnew to the CSk of the same length. (Of course, if CSnew is the longest CS so far, then there would be no CSk of the same length, and we wouldn’t need to call Expendable at all.) And that call to Expendable would always have a definitive result: either you throw away CSk, or you throw away CSnew. Only one CS of any given length would be left standing at any given time.
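To make that concrete, here’s a minimal sketch of what “one CS per length” bookkeeping could look like. This is purely illustrative Python, not the series’ actual code; `keep_or_discard` and the `expendable` callback are hypothetical names, and it assumes Expendable always names a definitive loser when the two CSes are the same length:

```python
# Hypothetical sketch: keep at most one CS per length.
# best[k] holds the single CS of length k+1 that we've kept so far.

def keep_or_discard(best, cs_new, expendable):
    """Install cs_new into best, keeping one CS slot per length."""
    k = len(cs_new)
    if k > len(best):
        # cs_new is the longest CS so far: no same-length rival exists,
        # so no call to expendable is needed at all.
        best.append(cs_new)
        return
    rival = best[k - 1]
    # One definitive call: either the rival or cs_new is thrown away.
    if expendable(rival, cs_new) is rival:
        best[k - 1] = cs_new
```

The fourth loop collapses into that single `expendable` call, and `best` never holds two CSes of the same length.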

A worthy optimization, to be sure. But is it doable? We would have to throw away one of those CSes-of-the-same-length, but Expendable([0,0] [1,2], [0,0] [2,1]) returns nil, meaning we need both CSes at one time or another. How could we reconcile this?

By seizing on that “at one time or another”, and making this into a timing issue. Then we can control the timing, by controlling the way we iterate. How, and in what order, do we generate our matches?

We would be home free if we could generate our matches in this order:

  • [0,0]
  • [2,1]
  • Every possible match that could extend [2,1] but not [1,2]
  • [1,2]

Or perhaps:

  • [0,0]
  • [1,2]
  • Every possible match that could extend [1,2] but not [2,1]
  • [2,1]

As long as our algorithm can consistently return matches in one of these orders (and can be consistent about which one), then when we get to the point where [0,0] [1,2] and [0,0] [2,1] are our CSnew and CSk (in whichever order), we know with absolute certainty that we can throw the old one away, replace it with the new one, and be sure that we’re still going to find the longest possible CS when we’re done.

So how do we handle the third bullet point in either of the above orderings? That third bullet is kind of the “and then a miracle occurs” stage. Next time, I’ll start to nail down what “every possible match that could extend x but not y” really means, and how we can possibly hope to pull it off.

Life’s little mystery #74,512: Scented cat litter

So what’s the idea behind cat litter with perfume in it? I picked some up by mistake months ago, and still have most of it left, ’cause it makes the litterbox smell worse than before I changed it. Maybe it’s just me, but I don’t really want my cats’ butts to smell like some little old lady’s idea of appropriate (cough) fragrance (wheeze) for Sunday morning Baptist church service (gag). Thanks, but no thanks.

Diff, part 18: Optimizing the third loop

Our diff series continues, once again focusing on optimization.

On Monday, I sketched out our overall algorithm to date. Last Friday, I showed an Expendable function for eliminating common subsequences that aren’t going anywhere. This time, I’ll look at an interaction between those two pieces.

Suppose we have four CSes so far:

  • [1,1]
  • [1,1] [2,2]
  • [1,1] [2,2] [3,3]
  • [1,1] [2,2] [3,3] [4,4]

And let’s say the next match in our input is [5,5]. Our algorithm says to try appending the [5,5] to each of the existing CSes, then use Expendable to compare the new CS to existing CSes and trim things back. So we would come up with:

  • [1,1] [5,5], which gets eliminated because Expendable([1,1] [2,2], [1,1] [5,5]) = [1,1] [5,5]. (See Expendable scenario #10.)
  • [1,1] [2,2] [5,5], which gets eliminated because Expendable([1,1] [2,2] [3,3], [1,1] [2,2] [5,5]) = [1,1] [2,2] [5,5]. (See scenario #10.)
  • [1,1] [2,2] [3,3] [5,5], which gets eliminated because Expendable([1,1] [2,2] [3,3] [4,4], [1,1] [2,2] [3,3] [5,5]) = [1,1] [2,2] [3,3] [5,5]. (See scenario #10.)
  • [1,1] [2,2] [3,3] [4,4] [5,5], which we keep.

Our first three tries wound up getting thrown away, and it looks like a pattern may be forming. But it’s hard to tell whether the pattern is related to our specific inputs, or if it’s true in general. Will the first few CSnews we generate always be wasted effort?

Let’s look at it a different way, one that doesn’t depend so much on the particular inputs and the current state. Our algorithm-to-date uses Expendable in a specific way: it always passes an existing CSi as the first parameter to Expendable, and a proposed CSnew as the second parameter. But we’re not restricted to using Expendable this way; we can actually call it with any two CSes, including two proposed CSnews. That means we can compare our four new CSes to each other, and anything that drops out is, by definition, less worthy. If we never bother to pass those less-worthy CSes on to the rest of the algorithm, we know, from the way we defined Expendable, that we won’t be hurting our chances of finding the longest possible CS.

There are several different combinations of ways we could call Expendable on our four CSnews, but let’s just call it with successive pairs and see what happens:

  • Expendable([5,5], [1,1] [5,5]) = [5,5]. (See scenario #5.)
  • Expendable([1,1] [5,5], [1,1] [2,2] [5,5]) = [1,1] [5,5]. (See scenario #5.)
  • Expendable([1,1] [2,2] [5,5], [1,1] [2,2] [3,3] [5,5]) = [1,1] [2,2] [5,5]. (See scenario #5.)
  • Expendable([1,1] [2,2] [3,3] [5,5], [1,1] [2,2] [3,3] [4,4] [5,5]) = [1,1] [2,2] [3,3] [5,5]. (See scenario #5.)

So of our four CSnews, three turned out to be expendable with regard to each other. (This is the same thing we saw above, but this time we’ve eliminated one variable — we’re just looking at the CSnews; the outcome no longer depends on what CSes we already happened to have on tap.) And there seems to be a pattern here: scenario #5 keeps showing up. Let’s take a closer look at scenario #5:

 # | Len | AI | BI | CS1   | CS2         | Make CS1 win  | Make CS2 win | Exp
 5 | <   | =  | =  | [3,3] | [0,0] [3,3] | None possible | EOF          | 1

What does this mean in English? It means that if two CSes are different lengths, but the last match is the same in both, the shorter CS can be thrown away.

This is an appealing rule, because it immediately eliminates obvious holes. But it also works in cases without obvious holes: if, for example, we have [0,3] [1,4] [5,5] and [2,0] [3,1] [4,2] [5,5], we have no obvious holes, but Expendable still tells us, with confidence, that we can throw the shorter one away.

This has a very practical application to our algorithm: we don’t need to append our new match to every one of our extant CSes. If we just use the longest CS-so-far that the new match will fit onto, we’re guaranteed safe — we don’t even need to try the others. When we’re deciding which existing CS(es) to extend with our new match, only two things matter: the existing CSes’ length, and whether we can append our new match to them. After all, once we’ve appended the same new match onto a bunch of different CSes, they lose their previous identity; now their last matches are all the same, and the only thing that can vary is their length. So scenario 5 kicks in, and all but the longest of this newly-minted batch of CSes will drop out. Why bother with anything but the longest to begin with?
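The scenario-#5 rule is simple enough to state as a standalone check. Here’s a sketch in Python (illustrative only; `same_tail_loser` is a hypothetical name, and matches are assumed to be `(i, j)` index pairs):

```python
# Hypothetical sketch of the scenario-#5 rule: when two common
# subsequences end in the same match, the shorter one is expendable.

def same_tail_loser(cs1, cs2):
    """Return the CS that scenario #5 lets us discard, or None."""
    if cs1 and cs2 and cs1[-1] == cs2[-1] and len(cs1) != len(cs2):
        return cs1 if len(cs1) < len(cs2) else cs2
    return None
```

Note that when the two CSes are the same length, this rule alone can’t choose, so it returns None.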

What if there’s more than one “longest CS that our match will fit onto”? For example, let’s say we already have these CSes:

  • [0,3]
  • [4,1]
  • [4,1] [6,2]

If the next match is [5,5], what do we do? First we look at our longest CS so far, [4,1] [6,2]. Can we append [5,5] to that, and still have a valid CS? No, we can’t, so we have to go for the shorter CSes. Can we append [5,5] to [0,3]? Yes. What about [4,1]? Yes. So we now have these CSnews:

  • [0,3] [5,5]
  • [4,1] [5,5]

Scenario 5 doesn’t help here, because these are both the same length. No, this is a job for scenario #14:

 #  | Len | AI | BI | CS1   | CS2   | Make CS1 win  | Make CS2 win  | Exp
 14 | =   | =  | =  | [3,3] | [3,3] | None possible | None possible | Either

Meaning, if CS1 and CS2 are both the same length, and both end in the same match, then we should throw one of them away, and it really doesn’t matter which.

With all this in mind, we can make another change to our algorithm’s outline, this time to the third loop (with a minor tweak to the fourth):

  • Loop through all the CSes we have so far, going from longest to shortest. Stop on the first CS that we can construct a longer CS from by appending [i, j].
  • Loop through all the CSes again, calling Expendable on [CSk, CSnew] and discarding CSes as needed.

A couple of finer points to mention here:

  • If we never find a CS that we can extend with [i, j], then we still generate a CSnew. In this case, our CSnew would be of length 1, containing only our new match. (This was true of the algorithm I sketched out on Monday, too; I should’ve specified this then. For example, if the only CS we have so far is [1,4], and our new match is [4,2], then we add [4,2] to our CSes-so-far list.)
  • If we have multiple existing CSes that have the same length (e.g., [0,3] [1,4] and [3,0] [4,1]), it doesn’t matter which of them comes first in our “longest to shortest” ordering. It’s not important whether our iteration visits [0,3] [1,4] or [3,0] [4,1] first, just that we don’t try either of them until we’ve tried everything longer, and that we try both of them before we try anything shorter.

This is a nice optimization. We just took two nested O(N) loops, and made them into two successive O(N) loops. This leg of the algorithm used to be O(N²), and now it’s just O(N). Making progress.
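The revised third step could look something like this sketch (Python, purely illustrative; `can_append` and `extend_best` are hypothetical names, and matches are assumed to be `(i, j)` pairs that must strictly increase in both indexes along a CS):

```python
# Hypothetical sketch of the revised third step: instead of extending
# every CS we have, extend only the longest CS the new match fits onto.

def can_append(cs, match):
    """A match extends a CS if both its indexes come after the CS's tail."""
    if not cs:
        return True
    last_i, last_j = cs[-1]
    return match[0] > last_i and match[1] > last_j

def extend_best(cses, match):
    """Build CSnew from the longest extendable CS (or the match alone)."""
    for cs in sorted(cses, key=len, reverse=True):  # longest to shortest
        if can_append(cs, match):
            return cs + [match]
    # Nothing extendable: CSnew is length 1, just the new match.
    return [match]
```

Per the finer point above, when two extendable CSes tie for longest, it doesn’t matter which one the iteration reaches first.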

Diff, part 17: Optimizing match discovery

The next really interesting topic in the diff series is going to take more time to write than I have this morning. So today, I’ll talk about a fairly simple optimization to the match-finding loops.

Our outer two loops looked like this:

  • Loop through all the elements in A, and for each Ai:
    • Loop through all the elements in B, and for each Bj that matches Ai:
      • (do some other stuff)

Now, imagine that we have input that’s a typical source-code file, perhaps C# or Delphi code, and that we’re ignoring leading whitespace as we compare.

In a source file of any length, we’re going to have repetition. There will be blank lines in both files. There will be lines containing only ‘{’ (C#) or only the keyword ‘begin’ (Delphi), which all compare the same because we’re stripping off the leading whitespace before we compare. Same thing for ‘}’ or ‘end;’.

Every time we come across one of those duplicated lines, we do a bunch of wasted work. Let’s see how:

  • Get the first element from A. Let’s say it’s “unit Foo;”.
    • Loop through all the elements in B, and for each Bj that matches “unit Foo;”, do some stuff.
  • Get the next element from A. This one is the empty string, “”.
    • Loop through all the elements in B, and for each Bj that matches “”, do some stuff.
  • Get the next element from A: “interface”.
    • Loop through all the elements in B, and for each Bj that matches “interface”, do some stuff.
  • Get the next element from A: the empty string again.
    • Loop through all the elements in B, and for each Bj that matches “”, do some stuff.
  • Get the next element from A: “uses SysUtils, Classes;”.
    • Loop through all the elements in B, and for each Bj that matches “uses SysUtils, Classes;”, do some stuff.
  • Get the next element from A: the empty string again.
    • Loop through all the elements in B, and for each Bj that matches “”, do some stuff.

Every time we get “” from A, we loop through B looking for all of its “”s. But we already did that. Couldn’t we save ourselves some work if we just remembered where all the “”s are, so we can just loop over the interesting bits of B, rather than the whole boring thing?

Indeed we could, and it would save us time even if there isn’t any duplication between the two lists. We do it by replacing those two outer loops with this:

  • Create a Hashtable called BHash.
  • Loop through all the elements in B, and for each Bj:
    • If BHash[Bj] is nil, then BHash[Bj] := TList.Create;
    • BHash[Bj].Add(j);
  • Loop through all the elements in A, and for each Ai:
    • Loop through all the matching “j”s we’ve already listed at BHash[Ai]:
      • (do some other stuff)

We start with a single pass through B to build a reverse-lookup table. (Ideally we would use a genuine Hashtable, so that adds and lookups are both always O(1). If we’re using Delphi for Win32, which doesn’t have a real Hashtable, we’ll have to fake one, or implement one from scratch.)

This reverse-lookup table maps each element in B to a list of indexes where that element appears:

BHash[“unit Foo;”] = (0)
BHash[“”] = (1, 3, 5, …)
BHash[“interface”] = (2)
BHash[“uses SysUtils, Classes;”] = (4)
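In Python (standing in for the post’s Delphi-flavored pseudocode, which has no built-in Hashtable), the reverse-lookup table is just a dict mapping each element to a list of indexes; `build_b_hash` and `find_matches` are hypothetical names for this sketch:

```python
# Sketch of the reverse-lookup optimization: one pass over B builds
# the table, then each element of A looks up only its actual matches.

def build_b_hash(b):
    """Map each element of B to the list of indexes where it appears."""
    b_hash = {}
    for j, element in enumerate(b):
        b_hash.setdefault(element, []).append(j)
    return b_hash

def find_matches(a, b_hash):
    """Yield (i, j) pairs where A[i] == B[j], skipping non-matches."""
    for i, element in enumerate(a):
        for j in b_hash.get(element, []):
            yield (i, j)
```

Both dict operations are O(1) on average, so building the table is a single O(N) pass.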

Now, when we get to an element in A, we’re not looping through every element in B. That was a bit wasteful to begin with, because for typical input, most of those elements in B wouldn’t match — we were looping over all N even though only a handful will actually matter. Now, with the addition of the reverse-lookup hash, we only look at the elements in B that do match.

So the second loop, instead of being O(N), is now O(D) — the number of iterations is roughly proportional to the number of duplicates in list B. Actually, list A figures into that calculation too, because we’ll only care about those duplicates in B if they also appear in A — after all, if they don’t appear in A, we’re never going to look them up in B.

D may approach N, if the lists contain almost all duplicates (e.g., if both input files consist of nothing but blank lines); or it may approach 1, if the lists contain almost no duplicates. It may even approach 0, if the two input files have nothing in common, because every time we go to look for something in BHash, there’ll be nothing to find and therefore nothing to iterate.

So our outer two loops, instead of being O(N²), are now O(ND). Not bad for a short morning’s work.

Diff, part 16: A look back, and a big O

I’ve been talking about the diff algorithm for three weeks now, and I’m proud to say that I now own the phrase “stupid diff” (at least as far as Google cares, which is all that really matters anyway). I think I’ll start putting a little trademark symbol after it.

[Update, 19 Apr 2005: Only a day after I posted this, and I’m no longer the #1 Google result for “stupid diff”. Oh well. Easy come, easy go.]

At this point, I’ve presented all the parts and pieces needed to make a serviceable Longest Common Subsequence algorithm. Now let’s look at just how serviceable. How efficient are these parts and pieces?

Last time, I said we’d reached polynomial time. But what degree polynomial are we talking about? Here’s a quick sketch of what the algorithm does:

  • Loop through all the elements in A, and for each Ai:
    • Loop through all the elements in B, and for each Bj that matches Ai:
      • Loop through all the CSes we have so far, and for each CS that we can construct a longer CS from by appending [i, j]:
        • Loop through all the CSes again, calling Expendable on [CSk, CSnew] and discarding CSes as needed.

Looks like our algorithm, so far, is O(N⁴). Sure beats our old super-exponential time, but N⁴ still leaves something to be desired.

Can we do better? You bet. In fact, by the time we’re done, the limiting factor is going to be how efficiently we can find matches: those outer two loops. Next time, I’ll take a look at making that a little more efficient (though still technically O(N²)), and start setting the stage for some of the later improvements.

How to convert a Unitarian, and good luck

For your reading amusement: Unpacking Unitarian Universalism.

Warning: If you’re a UU, you may want to make sure you’ve taken your blood-pressure medication before you read this. It’s a page telling Christians — Southern Baptists in particular — how to bring Unitarian Universalists to Jesus.

They do admit, though, that “By any standard, Unitarian-Universalists are a hard people to win to Jesus. You might even say that they are impossible to win, but with God all things are indeed possible.”

Indeed. All things are possible. Including the rejection of any religion that believes it’s got a monopoly on truth.

Lord, forgive me, for I am forming a renegade covenant group

Two particularly interesting things happened today. Perhaps the most interesting thing is that they’re related, in an odd sort of way.

After church today, we had a town-hall meeting to talk about covenant groups. Covenant groups are something my church is going to be starting soon, where small groups (8 to 10 people) get together twice a month and share their stories. The meetings have two parts: in the first part, people tell what’s been going on in their lives since the last meeting; the second part is discussion on a particular topic, with the focus on people’s individual experiences. It’s largely about community-building; building deeper connections with other people in the church.

Community is one of the big reasons I started going to church again, so I went to the “try it for a night” covenant-group event a couple of weeks ago. And liked it. I don’t remember what the discussion topic was, exactly, but I remember that it prompted me to talk about my experiences with blondes and Thursdays. (Long story.) By the time the meeting was over, our whole group was agreeing that, if that group were to continue as a formal covenant group, we should definitely meet on Thursdays.

Only one meeting, with people I had never really met before. And it was very cool, because we felt safe sharing things that we wouldn’t share with just anybody under just any circumstances. (Hence the Thursday thing.) It reminded me of two other groups I’ve been in at this church — the men’s group, and the adult OWL group — where I’ve also gotten to know people on a deeper level, and built that same sort of trust and connection. I’ve always been an introvert, and a loner for much of my life, but… this is something I really like. I’ve never had many friends, but those few were close. And covenant groups would give me more.

I’m just disappointed that they’re not going to start covenant groups for real until next fall. I still haven’t gotten used to the way Unitarian churches basically shut down over the summer. (They’re also not through planning all the details of how the groups will work, so it can’t all be blamed on summer vacation, but still.)

The town-hall meeting was a chance for people to ask questions and talk about the whole covenant-group idea. There were some very good questions, and good answers. And then, toward the end of the meeting, I raised my hand and got the mike, and said basically what I just wrote — that I’ve always been an introvert, but that I really like groups where I can build these sort of relationships with people — and that if they got something started over the summer, rather than making us wait until fall, it wouldn’t hurt my feelings a bit.

After the meeting, Eric, one of the people who’d been in my group at the “try it for a night”, talked to me, and suggested that if the official program wasn’t planning to start until fall, then maybe he and I could try to get something together before then. The idea wouldn’t have occurred to me, but I didn’t take much convincing. We talked to the minister (he’s heading the covenant-group program), suggesting that we start a sort of pilot program over the summer, with the expectation that not all the wrinkles would be out of the system yet; and he seemed open to the idea. He (the minister) will put out some feelers to see if anyone’s interested. To hear the comments of the people who’ve been in the first pilot program, I wouldn’t be surprised if some of them wanted to do a group over the summer.

We’ll have to see where it goes, but whether it’s over the summer or not until the fall, I’m definitely looking forward to the covenant-group stuff. It’s forming relationships with other people, it’s having spiritual discussions (something else I’ve been looking for); it’s spiritual growth. Most of the spiritual growth I’ve done in my life has been on my own, or, sometimes, while reading books. But it’s occurred to me that I really haven’t done a lot of it lately. I’m anxious to start again.

Which brings me to the second interesting thing that happened today. After playing video games all afternoon (with a two-hour break for a nap), I finally, a little after 11:00, decided I should probably fix some supper. On a whim, I went to eat in the bedroom, and flipped on the TV. The channel it happened to be on (ABC Family, I think) was showing a Christian minister speaking to a baseball stadium full of people.

Now, I’m not a Christian. I’m not really sure what I believe in, but some of the core Christian beliefs just don’t work for me. (Some days, I’m actually a little envious of Christians who are sure of the answers, when I’m not even sure of the questions.) Most days, I would’ve flipped the station. Today, again on a whim, I listened half-interestedly (at first).

They were evidently showing a service that had been taped on an Easter Sunday, and the preacher, a guy named Joel Osteen, was talking about forgiveness. And he had some interesting things to say.

(Side note to any Christians in my audience: Don’t think this means I’m about to go and get saved. I’m a Unitarian. We’re tough to convert. But I wouldn’t be much of a Unitarian if I weren’t looking to adopt the best features from every religion, and leave the rest.)

He talked about people he’s known who have been through rough times — a divorce, perhaps — and while they moved on with their lives and forgave everyone else, they couldn’t forgive themselves. He said that’s ridiculous, because God had already forgiven them — that, in his language, believers are automatically forgiven for all their sins, because of Christ’s sacrifice on the cross; that any of their sins were in the past, and God had already forgotten them; so what right did these people have to hang onto guilt that was already done and forgotten in God’s eyes?

Self-forgiveness. I listened to this guy, and I started to think. I’ve been under a lot of stress lately, and it’s getting to the point where stress and anxiety are starting to keep me up at night. Some of it is work-related, some of it is money-related, but I’ve started to suspect that most, or all, of it is coming from me, not from the people around me. And listening to this preacher, I started to wonder how much of it came back to me feeling guilty about one thing and another — about things that are in the past, that I can’t change. I started to wonder if there’s room for some self-forgiveness in there.

I went for a walk after that. I used to go for late-night walks, back when I was in high school and college. And I would think about things. When I think back, and try to figure out just what it was I would think about out there, I draw a blank, but I loved those walks. They recharged me. They got me away from the stress of the rest of my life, and brought me back to me. I think, now, that I used them as time to let my spiritual side out, just for a little while.

I haven’t been on those late-night walks as much lately, because they don’t recharge me the way they used to. But I went on one tonight, and I didn’t think about money, and I didn’t think about work. I thought about music, I thought about the book I want to write, I thought about what the preacher had to say on TV, I thought about a lot of other things. I didn’t resolve anything, but I wasn’t expecting to.

Stress. Guilt. I do have to wonder how closely they’re tied, inside me. I’ll keep on wondering, and trying to figure it out. But man, I wish I could talk about this in a covenant group.

Footnote: I didn’t realize it while he was talking, but after he finished speaking, they showed a picture of Joel’s book, “Your Best Life Now”, and I realized that Jennie already has a copy of it. I think, now, that I’ll have to read it.

Then he urged the viewers at home, those who weren’t already Christians, to accept God’s forgiveness, and go out and join a Bible-based church. I had to laugh at that. No offense intended to any of my readers, but I already tried a Bible-based church, several years ago — they’re the biggest reason I’m not a Christian.

Kitchen debugging

My homemade remedy didn’t last long. The ants got back into the kitchen Friday while I was at work.

Saturday morning, I spent about fifteen minutes outside, antwatching. Trying to figure out where their anthill was. I never did quite find it, but I figured out the general area, and started spraying (with the bug spray that the previous owners didn’t take with them when they moved out).

Wow, it worked. That particular ant colony now consists only of the few ants that were already in the kitchen, and never quite managed to find their way back out through the dish-soap gauntlet. I saw (and smushed) one straggler this afternoon, and otherwise none since yesterday.

Man (or rather, man’s consumer chemical industry) triumphs versus insect. It’s a small victory, but hey, you take what you can get.

Ants in the kitchen

Ain’t it great when you come home from work and find a line of ants straggling across your kitchen? Oy vey.

I applied my two homemade remedies, learned years ago. #1: Ants cannot swim through dish soap (the kind you use for hand-washing dishes), so pouring a line of dish soap across their path will cut them off quite effectively. Best done at the point where they’re entering the house, so they can’t find a way around. And #2: squash all the ants that are already inside the house. Make sure to do #1 first.

This seemed to work okay, as I only saw four ants this morning, presumably ones that were hiding when I applied remedy #2 last night. We’ll see how things look tonight. I also probably ought to clean the kitchen so there’s less for them to find, but we all know how that goes.

Fortunately, they hadn’t found the carton of sugar yet (or we would’ve had ant gridlock), although I found three or four ants in the cupboard where we keep it. I relocated the sugar to a much-more-distant cupboard for the time being (i.e., either until it looks like they’re not coming back, or until I call Orkin).