The Advent of Code 2023

It is time to be honest, and say "I will not finish it". Come on, it is the middle of February. But it was a good run anyway.

If you are like: WTF is Advent of Code <-- click that link.

For the first time in the many years since I first heard about it, I found time to participate. I solved twenty and a half out of twenty-five puzzles. Up until the tenth day, I was able to keep up, spending a few hours after work. After that, the puzzles became more complex, and I started to lag behind. And day twelve was the day when I could not solve the second part of a puzzle at all.

I used Rust, and it was a good choice. I do not think I would have been more productive with any other language. At the same time, I feel like I now have fewer skill issues than before. This is exactly the result I hoped for. So, no regrets.

One of the unexpected side effects was that I started to feel numbers. I know that sounds weird, but I cannot find a better way to describe it. You know, when you look at a number and feel the urge to subtract one and divide it in half, and that turns out to be the answer you are looking for. That kind of feeling.

Will I participate in this year's Advent? I hope so. I enjoyed the puzzles and the feeling when you finally figure out how to solve them. So yeah, if nothing out of the ordinary (the bar for the ordinary is already really high) happens, I will participate in the next Advent of Code.

Below are my notes. If you still want to solve this year’s puzzles, SPOILER ALERT.

You can find the code and unedited notes in the GitHub repo.

Day 1

Off by one errors everywhere.

Day 2

Today's calculation will have two parts: one and two.

The initial idea of using PartialOrd to compare Rounds was bad.

Day 3

IDK if I should support all previous days. It seems like little work.

Today's task is much more complex than previous ones. Tests are required.

There is a huge potential to optimize, but it is Sunday, and I still need to finish the first Fallout.

Day 4

Part one was surprisingly easy.

Part two was hard to understand, probably because of the fever.

Day 5

The brute force approach takes too much time.

rayon goes brrrr

The test case for part two passes, but the answer is incorrect. I am a sad panda.

Oooooh. The range in maps is not inclusive. So, I passed the first stage and test assignment by luck :)
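
To spell the off-by-one out: a map line covers exactly length source values starting at source_start, so the check is against a half-open range. A rough sketch with illustrative names (not necessarily what the repo calls things):

/// One mapping line: `destination_start source_start length`.
struct MapRange {
    destination_start: u64,
    source_start: u64,
    length: u64,
}

impl MapRange {
    /// The covered range is half-open: `source_start..source_start + length`,
    /// so `source_start + length` itself is NOT mapped.
    fn map(&self, value: u64) -> Option<u64> {
        if value >= self.source_start && value < self.source_start + self.length {
            Some(self.destination_start + (value - self.source_start))
        } else {
            None
        }
    }
}

fn main() {
    // A line like `50 98 2` covers only the source values 98 and 99.
    let range = MapRange { destination_start: 50, source_start: 98, length: 2 };
    assert_eq!(range.map(98), Some(50));
    assert_eq!(range.map(99), Some(51));
    assert_eq!(range.map(100), None); // the off-by-one I initially got wrong
}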

Day 6

That was easier than day 1.

Day 7

The day when my neat little abstraction broke on part two, and now it is not as beautiful as it could be.

Day 8

Today is the day when brute force is a no-go.

After a couple of hours of looking at the debug output: we are looping with the same intervals. Which means we need to find all the loop lengths and take the LCM of those.

My math is rusty. Let’s do naive LCM.

Getting the result in 3 minutes is OK. I guess.

I could not relax, so I googled how people calculate LCM. It turned out that more than 2000 years ago, our boy Euclid had already figured it all out.

Now it is blazingly (TM) fast.
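
For the record, the GCD/LCM combo looks roughly like this (the loop lengths in main are made-up placeholders; the real ones come from walking the map):

/// Euclid's algorithm for the greatest common divisor.
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let remainder = a % b;
        a = b;
        b = remainder;
    }
    a
}

/// Least common multiple via the GCD; dividing first keeps the numbers small.
fn lcm(a: u64, b: u64) -> u64 {
    a / gcd(a, b) * b
}

fn main() {
    // Placeholder loop lengths, one per starting node.
    let loop_lengths = [20_093_u64, 12_083, 22_199, 17_141, 13_207, 18_827];
    let answer: u64 = loop_lengths.iter().copied().fold(1, lcm);
    println!("{answer}");
}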

Day 9

To solve today's riddle, I wrote the least amount of code.

Day 10

I was trying to clear the map of unconnected pipes instead of following the trail. I spent a few hours before realizing there was an easier way to solve part one.

Part two.

OK, first idea: since we have our loop of pipes from part one, let's replace all tiles that are not connected to the main loop with zeroes.

I think I can simplify the loop itself by shortening 180-degree turns. For example, this:

0000
0F70
0||0

Can be:

0000
0..0
0F70

Without losing information.

That was a bad idea. I spent too much time implementing it before realizing it did not lead anywhere. The better idea is to follow the loop and mark tiles to the left and right of it. One of the sides will be inside and the other outside. Fill the void and count.

The code is ugly as hell, but it gives the correct result.

I added some love to it before starting the next day. Also, the way I did pipes sucks.

Tile::Pipe(Direction::North, Direction::East)

That's an L pipe. Yeah, I know. It’s not obvious at all.

Day 11

Somehow, columns are hard. I spent too much time implementing universe expansion. Everything else was easy.

Part two does not fit into memory. Even if I made it fit with bitvec or a similar bit-manipulation library, the amount of data would be too big to process in a sane amount of time.

The current idea is to expand the universe a few times and find the speed at which galaxies are moving away from each other. Knowing the speed, the distance should be easy to find.

I am already lagging one day behind, so the best move is to go to sleep.

That idea about speed was terrible. I spent too much time figuring out how to measure the expansion rate, and none of my measurements made sense. Anyway, the working idea is that instead of expanding the universe, you can gather the positions of the galaxies and manipulate those positions directly.
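
Roughly, it looks like this (a sketch with illustrative names, not the exact code from the repo): shift each galaxy by factor - 1 for every empty row and column before it, then sum the Manhattan distances.

/// Shift galaxy coordinates instead of expanding the grid.
/// `galaxies` holds (row, col) positions; `empty_rows` / `empty_cols` are the
/// indices of rows/columns without any galaxy; `factor` is 2 for part one and
/// 1_000_000 for part two.
fn expanded_positions(
    galaxies: &[(usize, usize)],
    empty_rows: &[usize],
    empty_cols: &[usize],
    factor: usize,
) -> Vec<(usize, usize)> {
    galaxies
        .iter()
        .map(|&(row, col)| {
            let rows_before = empty_rows.iter().filter(|&&r| r < row).count();
            let cols_before = empty_cols.iter().filter(|&&c| c < col).count();
            (row + rows_before * (factor - 1), col + cols_before * (factor - 1))
        })
        .collect()
}

/// Sum of Manhattan distances between all pairs of galaxies.
fn sum_of_distances(galaxies: &[(usize, usize)]) -> usize {
    let mut sum = 0;
    for (i, &(r1, c1)) in galaxies.iter().enumerate() {
        for &(r2, c2) in &galaxies[i + 1..] {
            sum += r1.abs_diff(r2) + c1.abs_diff(c2);
        }
    }
    sum
}

fn main() {
    // Tiny made-up example: two galaxies with one empty row and column between them.
    let galaxies = [(0, 0), (2, 2)];
    let shifted = expanded_positions(&galaxies, &[1], &[1], 2);
    assert_eq!(sum_of_distances(&shifted), 6);
}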

I will not rewrite the first part.

Day 12

TBH, I do not know where to start. I will start with a nap.

Part one was hard. I had no clue where to start until I saw this post: Solving Nonograms with 120 Lines of Code.

Again. Tests are green, but the result is wrong :(

I gave up and looked at Reddit. This was the test case that I was missing: ??#???##?? 1,3

I did not solve part two with brute force. Despite spending only 46 nanoseconds on checking each combination, some rows had so many combinations that solving them would take a few hours.

With a Christmas party and hangover after it, I am lagging five days behind :(

Ok. Instead of generating all possible combinations, I can use a tree. ??#???##?? 1,3 can be

                            _/\_
                          0|0  0|0
                    -------.    #------             1) '.', 2) '#' - 1,2 are possible
                  1|0     1|0  1|1    1|1 
                   .       #    .      #            1) '..', 2) '.#', 3) '#.', 4) '##' - 1,2,3 are possible  
                  2|0     2|1  2|1     
               ----#--     #  --#------             1) '..#', 2) '.##', 3) '#.#'  - 1,3 are possible 
              3|1    3|1     3|1      3|1
          -----.----  #       .   -----#---         1) '..#.', 2) '..##', 3) '#.#.', 4) '#.##' - 1,4 are not possible
         4|1      4|1            4|1     4|1
        --.----    #-------       .     --#----     1) '..#..', 2) '..#.#', 3) '#.##.', 4) '#.###' - 1,2,4 are possible
       5|1   5|1  5|1     5|1          5|1    5|1
        .     #    .       #            .      #    1) '..#...', 2) '..#..#', 3) '..#.#.', 4) '..#.##', 5) '#.###.', 
       6|1   6|1          6|1          6|1              6) '#.####' - 1,2,4 are possible
        #     #            #            #           1) '..#...#', 2) '..#..##', 3) '..#.###', 4) '#.###.#' 
       7|1   7|1          7|1                           - 1,2,3 are possible
  ------#     #-------     #                        1) '..#...##', 2) '..#..###', 3) '..#..####' - 1,2 are possible
 8|1   8|1   8|1    8|1   
  .  ---#-    .      #                              1) '..#...##.', 2) '..#...###', 3) '..#..###.', 4) '..#..####'
    9|1 9|1  9|1                                        - 2,3 - are possible
     .   #    .                                     1) '..#...###.', 2) '..#...####', 3) '..#..###..' 
     ^        ^                                         - 1,3 are possible

Where the labels are (spring index) | (group index).

Now, while I can eyeball what is possible and what is not, somehow I cannot think of a way to solve it elegantly :(

Yeah, so a few hours and a bunch of ifs later, tests are passing. The first part produces the same result as before. Everything is super fast. But the result is too low.

I added more tests. They are also green. The result is still wrong. It may be time to give up and go to Reddit.

All test cases that I found on Reddit are also green.

I am giving up. I can't come up with a test case that would fail. I will go and do other puzzles.
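
For completeness: the direction that tree points in is a memoised recursion over (spring index, group index) pairs. A rough sketch of that technique as I understand it (an illustration, not the code from the repo, which still gives a wrong answer):

use std::collections::HashMap;

/// Count the arrangements of `springs` (bytes '.', '#', '?') matching `groups`,
/// memoised over (spring index, group index).
fn count(
    springs: &[u8],
    groups: &[usize],
    si: usize,
    gi: usize,
    memo: &mut HashMap<(usize, usize), u64>,
) -> u64 {
    if let Some(&cached) = memo.get(&(si, gi)) {
        return cached;
    }
    let result = if si >= springs.len() {
        // Ran out of springs: valid only if every group has been placed.
        (gi == groups.len()) as u64
    } else {
        let mut total = 0;
        // Option 1: treat the current spring as operational ('.').
        if springs[si] != b'#' {
            total += count(springs, groups, si + 1, gi, memo);
        }
        // Option 2: start the next group of damaged springs ('#') right here.
        if gi < groups.len() {
            let len = groups[gi];
            let fits = si + len <= springs.len()
                && springs[si..si + len].iter().all(|&c| c != b'.')
                && (si + len == springs.len() || springs[si + len] != b'#');
            if fits {
                total += count(springs, groups, si + len + 1, gi + 1, memo);
            }
        }
        total
    };
    memo.insert((si, gi), result);
    result
}

fn main() {
    // The Reddit test case from above; the tree ends with two valid leaves.
    let mut memo = HashMap::new();
    println!("{}", count(b"??#???##??", &[1, 3], 0, 0, &mut memo));
}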

Day 13

Part one was easy, which had a healing effect on my ego after the previous day's failure.

The main struggle in the second part was understanding what I needed to do. But I managed.

Day 14

Part one is solved by splitting and sorting.

I knew that brute force would not work. I tried anyway, and even a small test case did not finish in a reasonable time. The debug output shows that the results are cycling after some number of iterations. The current plan is:

  • cycle detection
  • cycle length/start extraction

Easy.

TBH, I did not expect it to be easy. But it was.
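
The cycle arithmetic boils down to something like this (a generic sketch, where State stands in for the whole map and step applies one spin cycle):

use std::collections::HashMap;
use std::hash::Hash;

/// Apply `step` until a state repeats, then use the cycle start and length to
/// jump straight to the state after `target` iterations.
fn state_after<State, F>(mut state: State, target: usize, mut step: F) -> State
where
    State: Clone + Hash + Eq,
    F: FnMut(&State) -> State,
{
    let mut seen: HashMap<State, usize> = HashMap::new();
    let mut i = 0;
    while i < target {
        if let Some(&start) = seen.get(&state) {
            // Cycle detected: it begins at `start` and repeats every `length` steps.
            let length = i - start;
            let remaining = (target - i) % length;
            for _ in 0..remaining {
                state = step(&state);
            }
            return state;
        }
        seen.insert(state.clone(), i);
        state = step(&state);
        i += 1;
    }
    state
}

fn main() {
    // Toy example: a counter modulo 7 stands in for the real map.
    let result = state_after(0_u64, 1_000_000_000, |&n| (n + 1) % 7);
    assert_eq!(result, 1_000_000_000 % 7);
}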

Day 15

Part one is surprisingly easy.

Every day, I fear that it will be day 12 all over again. But not this day.

Day 16

Moving the light beam was easy since I already had stuff like Map and Direction from previous days. The only complication was that there is a lot of movement, and you need to remember from which tile in which direction you have already moved.
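
The bookkeeping is basically a breadth-first walk over (position, direction) pairs; a sketch where next_directions is a placeholder for whatever the mirror or splitter at a tile does to an incoming beam:

use std::collections::{HashSet, VecDeque};

type Pos = (i64, i64); // (row, column)
type Dir = (i64, i64); // (row delta, column delta)

/// Follow beams, remembering every (position, direction) pair already
/// processed so that loops of mirrors do not run forever.
fn energized_tiles<F>(
    start: Pos,
    start_dir: Dir,
    in_bounds: impl Fn(Pos) -> bool,
    next_directions: F,
) -> usize
where
    F: Fn(Pos, Dir) -> Vec<Dir>,
{
    let mut seen: HashSet<(Pos, Dir)> = HashSet::new();
    let mut queue: VecDeque<(Pos, Dir)> = VecDeque::new();
    queue.push_back((start, start_dir));
    while let Some((pos, dir)) = queue.pop_front() {
        if !in_bounds(pos) || !seen.insert((pos, dir)) {
            continue; // off the map, or this beam state was handled already
        }
        for new_dir in next_directions(pos, dir) {
            queue.push_back(((pos.0 + new_dir.0, pos.1 + new_dir.1), new_dir));
        }
    }
    // A tile is energized no matter from which direction the beam crossed it.
    seen.iter().map(|&(pos, _)| pos).collect::<HashSet<_>>().len()
}

fn main() {
    // Toy check on an empty 3x3 grid: a beam going right energizes one row.
    let in_bounds = |(r, c): Pos| (0..3).contains(&r) && (0..3).contains(&c);
    let pass_through = |_pos: Pos, dir: Dir| vec![dir];
    assert_eq!(energized_tiles((0, 0), (0, 1), in_bounds, pass_through), 3);
}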

Also, today, I managed to do three parts before going to sleep. Kind of proud of myself :)

Part two was even easier. I thought that I would need to use some black magic to speed things up, but everything was fast enough already.

Day 17

Of course, I can google Dijkstra's algorithm, but it would be too easy.

I am so proud of myself. My homegrown algorithm does find the shortest path on test data in only 10 minutes :) It dies with a stack overflow on the actual data. It may be time to call Dijkstra for help.

I spent some time reading about Dijkstra's algorithm and can't figure out how to apply the "no more than three steps in one direction" rule to it. But reading about it gave me an idea that I want to try.

For every point on the map, we have a few directions we can go. The number of those directions depends on the three previous moves. This means that for every pair of ([Option<Direction>; 3], index), we can remember the minimum distance.

That works very well. The test run finishes in 0.04 seconds, much better than 10 minutes. The run on actual data still fails with a stack overflow.

LLDB shows that Iterators occupy too much space on the stack. I replaced them with a for loop. It's still not good enough. The question is, should I continue to look for a way to reduce the number of branches, or should I look for a completely new approach?

Ok. The way to reduce branches is to pass a reference to the current best, and if the current path produces a greater value, then there is no sense in continuing. The algorithm gives a correct result on test data and finishes on actual data with the result 1044, but: "That's not the right answer; your answer is too high. Curiously, it's the right answer for someone else; you might be logged in to the wrong account or just unlucky." Yes. That's who I am. Unlucky :)

In the current implementation, because of caching, the algorithm reaches the last tile (and updates current_best from a full path) only a few times at the start. At the same time, the cache contains the missing part of the calculation. I can improve that part while thinking about how to fix the errors.

While updating that thing, I had one more idea regarding current_best: instead of usize::MAX, I can use the worst case, which is the longest path times the highest tile value, where the longest path is roughly side 1 + side 2, so (side 1 + side 2) * 9.

That made the test fail. This makes me think that current_best and caching prevent us from visiting some tiles. Without cache, everything is too slow, so I am back to the same question - should I continue looking for a way to optimize the number of branches, or should I look for a completely new approach? Looks like optimizing branches is not as easy as I hoped. It’s time to get off the keyboard and think of a new approach.

The best idea I have come up with so far is to unroll the recursion. I will do that and see how it goes.

I am not smart enough to unroll that recursion. Instead, I'll try to do some kind of Dijkstra's algorithm. Here is the rough idea. Initialize the visited vector with None. Mark the start index as Some(0). For every visited tile, we get all unvisited neighbors, each with a list of values from visited tiles. For every unvisited neighbor, we take the lowest value from the list and put the sum of that value and the value of the current tile into the visited vector. The lowest value at the end will be the shortest distance.

The first iteration comes close to the actual value but is not good enough. My best guess: when an unvisited neighbor has visited neighbors with the same value, 11 and 11, for example, I do not know which one to choose, so I am choosing the first one. That will impact the route later because of the rule of three from the task.

Let's zoom out a bit.

1 2 3 4 5
2 2 3 4 5
3 2 3 4 5
4 2 3 4 5
5 2 3 4 5

Consider how I can get to the tile (3,3):

  • (0,3) E (1,3) E (2,3) E (3,3) from E with 3 E
  • (1,2) S (1,3) E (2,3) E (3,3) from E with 2 E
  • (2,1) S (2,2) S (2,3) E (3,3) from E with 1 E
  • (3,0) S (3,1) S (3,2) S (3,3) from S with 3 S
  • ...

This means my node looks like this: (x,y, direction_from_where_we_got_here, amount_of_steps_to_the_same_direction) -> distance.

It took me some time to make it work. But in the end, there are two steps. First, build a graph. Second, implement a good enough Dijkstra's algorithm to find the shortest path on that graph. It could be better, but it is good enough already.
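
The second step is a textbook Dijkstra over those nodes; a sketch that assumes the graph edges (with the three-steps rule already baked into them) are built elsewhere:

use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

/// Node: (x, y, direction we arrived from, steps taken in that direction).
type Node = (usize, usize, u8, u8);

/// Plain Dijkstra over pre-built edges: `edges[&node]` lists (neighbour, cost)
/// pairs. Building those edges, including the "no more than three steps in one
/// direction" rule, is the other half of the solution.
fn shortest_distances(edges: &HashMap<Node, Vec<(Node, u32)>>, start: Node) -> HashMap<Node, u32> {
    let mut dist: HashMap<Node, u32> = HashMap::new();
    let mut heap = BinaryHeap::new();
    dist.insert(start, 0);
    heap.push(Reverse((0_u32, start)));
    while let Some(Reverse((d, node))) = heap.pop() {
        if d > *dist.get(&node).unwrap_or(&u32::MAX) {
            continue; // stale heap entry; a shorter path was already found
        }
        for &(next, cost) in edges.get(&node).into_iter().flatten() {
            let candidate = d + cost;
            if candidate < *dist.get(&next).unwrap_or(&u32::MAX) {
                dist.insert(next, candidate);
                heap.push(Reverse((candidate, next)));
            }
        }
    }
    dist
}

The answer is then the minimum distance over all nodes whose (x, y) is the bottom-right corner.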

Part two required building a slightly different graph, which was hard because of the New Year celebration.

Happy New Year, BTW.

Day 18

Part one was relatively easy since I had already built most of the moving parts (Map, Directions).

The current map expansion is way too slow for part two.

Ok. I made it faster. Now, I am running out of memory. Even if I use u8 instead of char as my tile, it still tries to allocate more than 150 gigs of RAM.

There should be a formula to calculate the area of a polygon.

It turned out there is the Shoelace formula, with a straightforward implementation.

The answer is too low. Hmmm.

It gives almost the expected result on test data:

  • 952404941483 (mine)
  • 952408144115 (expected)

I suspect the Shoelace formula does not account for the 'trench', which should be our perimeter.

No. It is too big.

Can it be half of it? It is. Almost. I don't know why (I am too sleepy to figure it out), but the difference between the Shoelace formula result and the expected area is perimeter / 2 + 1.
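
Put together, the whole calculation fits in one small function; a sketch over the trench corner points (lagoon_area is an illustrative name):

/// Lagoon size from the trench corners (in digging order): Shoelace area of
/// the path plus half the perimeter plus one.
fn lagoon_area(vertices: &[(i64, i64)]) -> i64 {
    let n = vertices.len();
    let mut shoelace_twice = 0_i64;
    let mut perimeter = 0_i64;
    for i in 0..n {
        let (x1, y1) = vertices[i];
        let (x2, y2) = vertices[(i + 1) % n];
        shoelace_twice += x1 * y2 - x2 * y1;
        perimeter += (x2 - x1).abs() + (y2 - y1).abs();
    }
    shoelace_twice.abs() / 2 + perimeter / 2 + 1
}

fn main() {
    // A toy 2x2 dig: the trench plus the inside is a 3x3 block of 9 tiles.
    let square = [(0, 0), (2, 0), (2, 2), (0, 2)];
    assert_eq!(lagoon_area(&square), 9);
}

(For the curious: this correction is Pick's theorem rearranged, interior + boundary = area + boundary / 2 + 1, which explains the mysterious + 1.)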

Day 19

Part one has complicated rules to encode. And maybe I over-engineered it a bit.

The same is true for part two. Even after I wrapped my head around what needed to be done, keeping all requirements in mind was hard. Proper naming helps a lot.

Day 20

It is that kind of day: tests are green, but the result on actual data is too low.

I don't see any obvious error and am too sleepy to continue looking.

Yep. That was a bug in parsing. Let’s see what’s in part two.

Part two always starts with "I can brute force through it." It didn't work this time.

While looking at various debug outputs, I noticed a pattern:

button press# 3778, origin: 7, value: High
button press# 3888, origin: 28, value: High
button press# 3906, origin: 53, value: High
button press# 4050, origin: 38, value: High
button press# 7557, origin: 7, value: High
button press# 7777, origin: 28, value: High
button press# 7813, origin: 53, value: High
button press# 8101, origin: 38, value: High
button press# 11336, origin: 7, value: High

I need all those origins to be High at the same button press. Which looks like a task for the LCM. I will check it out tomorrow.

Yeah, I was on the right track yesterday. The answer is the LCM of the button press counts, with one caveat: the count should start from 1, not from 0.

Day 21

The first part was straightforward to brute force. Just flip tiles on the map and then count the values.

In part two, it was obvious from the beginning that the growing map would not fit in memory.

First idea: I will have a queue where I push back the next positions on the map and pop from the front the positions that I need to calculate on the current step.

Hm. It gives me more positions than I need. Oh. Oh. One point gives four possible points on the next step. Each point on the next step will return to the previous point, and I need only one of those. I can't figure out anything smarter than a HashSet.

It takes too long to compute 1000 steps on test data. I need to figure out if the result has a repeating pattern. The result is growing with each step. The rate of growth is also growing. What about that rate?

There is a pattern. Starting from step 39, on test data the second derivative is 2 at every eleventh step.

39: 944 d: 50, d2: 2
40: 989 d: 45, d2: -5
41: 1053 d: 64, d2: 19
42: 1107 d: 54, d2: -10
43: 1146 d: 39, d2: -15
44: 1196 d: 50, d2: 11
45: 1256 d: 60, d2: 10
46: 1324 d: 68, d2: 8
47: 1383 d: 59, d2: -9
48: 1464 d: 81, d2: 22
49: 1528 d: 64, d2: -17
50: 1594 d: 66, d2: 2
51: 1653 d: 59, d2: -7
52: 1735 d: 82, d2: 23
53: 1805 d: 70, d2: -12
54: 1853 d: 48, d2: -22
55: 1914 d: 61, d2: 13
56: 1988 d: 74, d2: 13
57: 2072 d: 84, d2: 10
58: 2145 d: 73, d2: -11
59: 2244 d: 99, d2: 26
60: 2324 d: 80, d2: -19
61: 2406 d: 82, d2: 2
62: 2479 d: 73, d2: -9
63: 2579 d: 100, d2: 27
64: 2665 d: 86, d2: -14
65: 2722 d: 57, d2: -29
66: 2794 d: 72, d2: 15
67: 2882 d: 88, d2: 16
68: 2982 d: 100, d2: 12
69: 3069 d: 87, d2: -13
70: 3186 d: 117, d2: 30
71: 3282 d: 96, d2: -21
72: 3380 d: 98, d2: 2

And while all the other values are all over the place, the sum of the second derivatives over those eleven steps is always 16. There are two problems: while I can eyeball it, I can't come up with a way to detect such a pattern, and most importantly, I am still trying to figure out how that can help me solve my problem.

The map from the test input is square, and eleven is the length of a side.

Before I think about how that can help me and then how to detect such a pattern, I want to check if such a pattern even exists on real input.

I cannot eyeball it over 1000 steps. After adding grouping by the second derivative, the pattern is visible there: starting from step 132, 0 repeats every 131 steps. 131 is the length of a side in the actual input. Looks like this is the way to detect the pattern: get the side length and iterate through the steps until the second derivative repeats.

Let's figure out what to do with it tomorrow.

It has been a long time since I wrote the previous sentence. I got sick and was in bed without any energy to continue. Now, I hope I am back to pre-sickness performance levels. Let’s figure out what’s left.

131 does not help at all. 26501365 is the number of steps. It is not divisible by 131. 26501365 - 133 is also not divisible by 131. So yeah, while there is a pattern, I do not see how it can help me.

I was playing a bit with the numbers. IDK if it means anything, but 202300 * 131 + 65 = 26501365

after 65 + 131 * 0 steps -> 3_917 positions
after 65 + 131 * 1 steps -> 34_920
after 65 + 131 * 2 steps -> 96_829
after 65 + 131 * 3 steps -> 189_644
after 65 + 131 * 4 steps -> 313_365
after 65 + 131 * 5 steps -> 467_992

Does it make any sense?

It does. Oh boy, it does. d1 - first derivative. d2 - second derivative.

N     Value       d1         d2
0    3_917
1    34_920      31_003
2    96_829      61_909     30_906
3    189_644     92_815     30_906           

This means I can use simple math to calculate the needed values, like this:

1: 3_917 + 31_003 + 30_906 * 0 = 34_920
2: 34_920 + 31_003 + 30_906 * 1 = 96_829
3: 96_829 + 31_003 + 30_906 * 2 = 189_644
4: 189_644 + 31_003 + 30_906 * 3 = 313_365
5: 313_365 + 31_003 + 30_906 * 4 = 467_992

Now I need to repeat that 202300 times. Easy.
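
Or in closed form, instead of literally repeating it 202300 times (extrapolate is an illustrative helper; the sampled values are the ones from the table above):

/// Extrapolate the count at 65 + 131 * n steps, using the constant second
/// difference from the table above.
fn extrapolate(v0: u64, v1: u64, v2: u64, n: u64) -> u64 {
    let d1 = v1 - v0; // first difference
    let d2 = (v2 - v1) - d1; // second difference, constant from here on
    // value(n) = value(n - 1) + d1 + d2 * (n - 1), summed up in closed form.
    let triangle = n * n.saturating_sub(1) / 2; // 0 + 1 + ... + (n - 1)
    v0 + n * d1 + d2 * triangle
}

fn main() {
    let (v0, v1, v2) = (3_917, 34_920, 96_829);
    assert_eq!(extrapolate(v0, v1, v2, 3), 189_644);
    assert_eq!(extrapolate(v0, v1, v2, 4), 313_365);
    assert_eq!(extrapolate(v0, v1, v2, 5), 467_992);
    // 26_501_365 steps is 65 + 131 * 202_300.
    println!("{}", extrapolate(v0, v1, v2, 202_300));
}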


As an afterword, I can say that the time I spent on Advent of Code was well spent. I had a lot of fun, and I am looking forward to the next one.