
It’s been a good run, but my thrill for blogging has ebbed. Most of my online conversation occurs on Twitter these days, and my Buckblog has definitely felt the lack of attention.

So, it is with some sadness that I announce the end of the Buckblog. Nothing is actually going away: the articles and existing comments will all remain where they are, but I won’t be posting any more, and no more comments will be accepted on any of them. It’ll all be “read-only” from here on out.

I have no doubt that in coming months and years I’ll get the itch to post something that won’t fit on Twitter. I’m not sure yet how I’ll work that. Maybe I’ll create a Tumblr account for the purpose. Or maybe I’ll just post static HTML pages and link them up via Twitter. I figure I’ll cross that bridge when I come to it.


I’ve been working more closely with Ruby 1.9 and Rails 3 lately, and while in general it’s been smooth going, there was one particularly disappointing road bump today.

Consider the following code, from a Rails functional test:

This works great with Ruby 1.8.7. The “tests” call sets SessionController as the controller under test, and the subclasses gain access to it via the “inheritable attributes” feature of ActiveSupport.

Sadly, this does not work in ruby 1.9. Those tests have errors now, saying that the controller instance variable needs to be set in the setup method, because the inheritable attribute of the parent is no longer being inherited.

Some digging and experimenting helped me pare this down to a simpler example:
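Here is a self-contained analog of the clash (my own plain-Ruby reconstruction, standing in for MiniTest and ActiveSupport, not the exact code I pared down to), which behaves the same way:

```ruby
# One library monkeypatches Class#inherited so subclasses inherit a value...
class Class
  def inherited(subclass)
    subclass.instance_variable_set(:@value, instance_variable_get(:@value))
  end
end

# ...and another library defines its own inherited hook WITHOUT calling super:
class TestCase
  def self.inherited(klass) # no super!
  end
end

class A < TestCase
  @value = 5
end

class B < A
end

p A.instance_variable_get(:@value)  # => 5
p B.instance_variable_get(:@value)  # => nil
```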

If you run this example with ruby 1.9, it will print “5”, and then “nil”. And the most telling bit: if you comment out the superclass from A’s definition, the program will then print “5” and “5”. It was obviously something that MiniTest was doing.

Another 30 minutes later, I had my answer. MiniTest::Unit::TestCase was defining an inherited callback method, to be invoked every time it was subclassed:

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
end

ActiveSupport, too, defines a version of the inherited method, monkeypatching the Class object directly so that subclasses can get a copy of their parent’s inheritable attribute collection. But because MiniTest::Unit::TestCase’s own definition of inherited sits closer in the method lookup chain than the monkeypatched Class#inherited, MiniTest’s version was overriding ActiveSupport’s, and the inheritable attributes were never copied on inheritance.

Frustrating!

All it takes is the addition of one line—just a single word!—to “fix” MiniTest’s version:

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
  super # <-- THIS RIGHT HERE
end

You can argue that ActiveSupport shouldn’t be monkeypatching system classes like that. Maybe, maybe not. The point is, it is, and it actually does it in a pretty “good neighbor” way. It saves a reference to any prior “inherited” definition, and calls it from within its version, chaining the calls together.
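To make that concrete, here is a minimal sketch of the chaining technique (my own illustration, not ActiveSupport’s actual source): keep a reference to the prior definition of the hook, and invoke it from the replacement:

```ruby
$inherited_log = []

class Class
  # keep a reference to whatever "inherited" was defined before us...
  alias_method :previous_inherited, :inherited

  # ...and chain to it from the replacement, good-neighbor style
  def inherited(subclass)
    previous_inherited(subclass)
    $inherited_log << subclass
  end
end

class Parent; end
class Child < Parent; end
```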

Sadly, MiniTest’s assumption that it would be the only consumer of the inherited hook in its ancestor chain totally kills ActiveSupport’s attempts at working together. I’ve had to resort to calling the tests helper in each test subclass, explicitly. Not a huge deal, but I’d sure love to have back the two hours I spent debugging this.

The lesson? Always be a good neighbor. Never assume you are the only kid on the playset. Call super when you override a method. Create a call chain when you replace a method in situ. Think of the children!


My previous post showed one way to create “weave mazes”, or mazes in which the passages weave over and under one another. The technique I showed was flexible, and could be implemented using several different maze algorithms, but it has (at least) one big drawback: it’s not predictable. You don’t know how many over/under crossings you’ll get until the maze has been generated, and for small mazes (5×5, for instance) you may very often get no crossings at all.

In the comment thread for that post, Robin Houston described an alternate algorithm. Instead of building the crossings as you generate the maze, you do it as a preprocessing step. You scatter crossings (either at random, or deliberately) across the maze, and then generate the maze so that they connect correctly.

Sounds like it could be tricky, yeah? You’ve got all these independent connections, like little graphs floating around, all disjoint… If only there were a way to generate a maze by connecting disjoint trees into a single graph…

What’s that? Oh, right! Kruskal’s algorithm does just that! Let’s take a look at how easy Kruskal’s makes this.

Weave mazes via preprocessing

I’m not going to review Kruskal’s algorithm here. If you don’t remember the details, I strongly recommend you read (or re-read) my article on Kruskal’s algorithm. Don’t worry, I’ll wait.

Seriously, read it. The rest of this post won’t make much sense unless you understand how Kruskal’s works.

Got it? Alright, let’s proceed.

So, let’s assume you’ve got your blank grid, and you’ve set it up (just like Kruskal’s wants) so that each cell is the root of its own (one-node) tree. You’re about to generate the maze…

First, though, we do the preprocessing step: let’s scatter over/under crossings about the grid. We have a few constraints:

no crossing can be placed on the edge of the grid (because that would imply a passage moving outside the grid).

no crossing can be placed immediately adjacent to another crossing (this just simplifies things—allowing adjacent crossings appears to introduce a surprising amount of complexity).

if the y-1 and y+1 (north and south) cells are already connected (in the same set), the crossing must not be allowed (because it would create a loop).

similarly, if the x-1 and x+1 (west and east) cells are already connected, the crossing must not be allowed.

So, we place a crossing. We randomly decide whether the over-passage is north/south or east/west, and then carve the appropriate values into the grid.

However, since we’re dealing with Kruskal’s algorithm, we also need to update the connection sets, and then remove the relevant edges from the edge set. Because we don’t allow adjacent crossings, we don’t ever have to worry about things connecting directly to the cross-over cells (this is why allowing adjacent crossings gets complicated). So, to update the sets, we just join the north/south cells, and the east/west cells. And then we remove the edges connecting the cross-over cell, and its adjacent cells.

A lot of words, but not so much work, in practice!

Once you’ve set all the over/under crossings, you’re ready to actually generate the maze. And the (ahem) amazing thing about Kruskal’s algorithm is, if you’ve correctly handled the edges and connections in the preprocessing step, you can run it without modification at this stage. The result will be a perfect weave maze!

For your enjoyment, here are some demos to play with. Try the different settings to see how the output changes: particularly, notice how Kruskal’s using the naive “weave as you go” approach generates far fewer crossings (on average) than the approach described here.

Since the only thing that changes between this weave technique and the non-weave Kruskal’s algorithm is the pre-processing step, I’m just going to go over the pre-processing step here.

You will probably also want to use a rendering method that allows you to unambiguously show the over/under crossings, such as the method using unicode characters that I described in my previous article.

Now, there are a lot of ways you could go about scattering crossings across the grid. For simplicity, I’m going to just iterate over each cell and randomly decide whether a crossing should be placed there. (This makes it easier to parameterize the process, allowing you to provide a “density” parameter to control how many crossings get placed.)

So, the first thing I do is just iterate over the possible cells:

1.upto(height-2) do |cy|
  1.upto(width-2) do |cx|
    # ...
  end
end

We start at 1 and go to n-2, because of the first constraint: crossings cannot be placed on the grid boundary.

Within the loop, I then use the “density” parameter (an integer from 0 to 100) to determine whether to skip this cell or not:

next unless rand(100) < density

Next, I compute the coordinates of the adjacent cells, and then check to make sure the cell really is eligible for a crossing:

nx, ny = cx, cy-1
wx, wy = cx-1, cy
ex, ey = cx+1, cy
sx, sy = cx, cy+1

next if grid[cy][cx] != 0 ||
  sets[ny][nx].connected?(sets[sy][sx]) ||
  sets[ey][ex].connected?(sets[wy][wx])

(Remember that sets is a two-dimensional array of Tree objects that allow us to quickly query and join sets.)

If the grid at the chosen point is non-zero, then (by implication) we are adjacent to another crossing, and that isn’t allowed. And if the north and south sets are already connected, or the east and west sets, then we can’t allow a crossing here either (lest we introduce a loop into the graph).
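The joins themselves are just a pair of calls on the Tree sets. Here is a runnable sketch, with a minimal stand-in Tree class (the connect/connected? names follow the Kruskal’s article, but treat the details as my assumptions):

```ruby
# minimal stand-in for the Tree (union-find) class from the Kruskal's post
class Tree
  attr_accessor :parent
  def root; parent ? parent.root : self; end
  def connected?(tree); root == tree.root; end
  def connect(tree); tree.root.parent = self; end
end

sets = Array.new(3) { Array.new(3) { Tree.new } }
cx, cy = 1, 1
nx, ny = cx, cy-1
wx, wy = cx-1, cy
ex, ey = cx+1, cy
sx, sy = cx, cy+1

sets[ny][nx].connect(sets[sy][sx])   # join the north and south cells
sets[ey][ex].connect(sets[wy][wx])   # join the east and west cells
```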

Now that the sets have been updated, we can carve the new passages into the grid:
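A sketch of that carving step follows (the direction bit values and the grid setup are my assumptions, for the sake of a runnable example):

```ruby
N, S, E, W, U = 1, 2, 4, 8, 16   # assumed direction bits, plus the "under" flag

grid = Array.new(5) { Array.new(5, 0) }
cx, cy = 2, 2
nx, ny = cx, cy-1
wx, wy = cx-1, cy
ex, ey = cx+1, cy
sx, sy = cx, cy+1

# randomly orient the crossing, then carve it and its four approaches
if rand(2) == 0
  grid[cy][cx] = E|W|U   # east-west on top, north-south underneath
else
  grid[cy][cx] = N|S|U   # north-south on top, east-west underneath
end

grid[ny][nx] |= S if (grid[ny][nx] & U) == 0
grid[sy][sx] |= N if (grid[sy][sx] & U) == 0
grid[wy][wx] |= E if (grid[wy][wx] & U) == 0
grid[ey][ex] |= W if (grid[ey][ex] & U) == 0
```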

Recall that the U constant is just used to indicate the presence of an “under” passage. Thus, “E|W|U” means an east/west passage with an implied north/south passage beneath it, and “N|S|U” means a north/south passage with an implied east/west passage beneath it.

Further, we only carve into an adjacent cell if it doesn’t already have the “under” bit set. (If it does, then it is already a four-way crossing, and the passage we’re trying to add is already present, either explicitly or implicitly.)

Finally, we just need to update the edge list, to account for the edges we’ve now implicitly processed by adding this crossing:

edges.delete_if do |x, y, dir|
  (x == cx && y == cy) ||
  (x == ex && y == ey && dir == W) ||
  (x == sx && y == sy && dir == N)
end

Remember that the edge list is just a collection of 3-tuples, with each tuple consisting of the x and y coordinates of a cell, and the direction that the edge exits that cell. It only contains edges that exit cells to the west, or the north (otherwise, we’d get duplicate edges: west from cell x is the same edge as east from cell x+1).

Thus, we’re deleting any edge connecting to the crossing cell itself, the west-facing edge from the eastern cell, and the north-facing edge from the southern cell.

When these loops finish, you’ll have a grid containing randomly-scattered crossings, ready for Kruskal’s algorithm to finish up.

Conclusion

This approach allows you to have weave mazes with a much higher density of crossings than the in-process approach. However, while the in-process approach can generate weave mazes with adjacent crossings, the algorithm described here cannot. I’ve spent some time trying to fix this flaw, but quickly found myself mired in complexity: I’m hoping brighter minds than mine can find an elegant solution to this!

Still, the biggest lesson I took away from this experience is this: it pays to know multiple different ways to solve a problem. I’m not a big fan of Kruskal’s algorithm in general (I don’t like the esthetic of the mazes it generates), but if I had never bothered to learn how Kruskal’s operates, I would never have been able to recognize its suitability for connecting the pre-processed crossings.

Stated more generally: the key to appearing smart is not knowing everything, but rather knowing the right thing.

Just because an algorithm may not be your favorite way of solving a problem most times, does not mean it won’t eventually be the best way to solve a problem. You just wait: one of these days I’ll encounter a problem while writing a web app that one of these maze algorithms will be perfect for!

So, give this algorithm a try, if for no other reason than it’s good exercise! And we all know exercise is good for you. 😉 My own implementation is given below:


This is a “weave” maze:

The name does not describe any specific algorithm for generating mazes, but rather simply identifies the class of maze, where some passages weave over or under other passages. It’s a type of 3D maze but it can be rendered nicely in 2D, as long as you conform to the following rules:

Passages may only move under or over each other when they are perpendicular to each other.

A passage may not terminate (dead-end) either under or over another passage.

Passages may not change direction either over or under another passage.

If you break any of those rules, you’ll wind up with hidden or ambiguous corridors when viewed as a 2D projection, and we don’t want that!

Let’s take a closer look at how the algorithms need to be adapted to support weave mazes.

Weave maze adaptation

The key change that needs to be made is that when you determine whether a particular move is valid, you no longer get to ignore a cell if it has already been visited. If the candidate cell has been visited, you need to compare it against the three rules above, and rule it out only if it fails one of them.

Consider the following scenario:

The highlighted cell is the current one, the one we need to move from. In a non-weave scenario this would be a dead-end, and we’d be done with that cell, but with weaves to consider, we can’t be so hasty.

So, we slow down and check it out. North? No, because the northern passage is not perpendicular to that direction. We’d wind up with an ambiguous passage if we tried to weave in that direction.

West? Nope, that takes us out of bounds.

South? Again, nope, because the passage south is not perpendicular to that direction.

Alright, so: east, our last and best hope? Ah, this one passes the perpendicular test—the corridor to the east moves north-to-south! The space on the far side of the corridor is empty, so we wouldn’t have to worry about deadending underneath the corridor, so again, we’re okay. And because we plan to just tunnel straight east, there’s no danger of changing direction mid-tunnel, either.

We have a winner! At this point, the algorithm would indicate that the passage moves under the north-south passage, and joins to the next cell over.

That’s really the gist of it. As long as the base maze algorithm can be adapted to accommodate this logic, you can implement weave mazes with it.

Try playing with this interactive demo (using the Growing Tree algorithm) to see how this approach to weave mazes works with different settings:

(I’ve not tried it with all of them, and some, like Eller’s, might be difficult to adapt. Also, while you can probably adapt Aldous-Broder or Wilson’s for this, you’ll probably no longer get a uniform spanning tree out of them, if that’s important to you.)

Now, what follows is just one way to implement a weave maze. I’m kind of a bit-twiddler at heart, so that’s the approach I took. If a passage moves under the cell, I just set another bit on that cell, indicating the presence of the tunnel.

The Growing Tree algorithm itself is exactly as I described before, but we add an else clause to check whether a weave is possible, and if so, to perform it:

nx, ny = x + DX[dir], y + DY[dir]

if nx.between?(0, width-1) && ny.between?(0, height-1)
  if grid[ny][nx] == 0
    # here's the original case, carving into an unvisited neighbor
    # ...
  elsif can_weave?(grid, dir, nx, ny)
    # here's where we perform the weave
    # ...
  end
end

The “can_weave?” helper method is straightforward. It just checks whether it is allowable to move from the candidate cell (nx, ny) in the given direction. (Remember that nx and ny are the coordinates of the cell we want to tunnel under!)

def can_weave?(grid, dir, x, y)
  cell = grid[y][x]
  return false unless cell == CROSS[dir]

  nx, ny = x + DX[dir], y + DY[dir]
  return ny.between?(0, grid.length-1) &&
    nx.between?(0, grid[ny].length-1) &&
    grid[ny][nx] == 0
end

The first check we do is to see if the passages in the indicated cell are perpendicular to the given direction. (That’s what the CROSS variable is all about; it’s just a hash returning which directions are perpendicular to one another.)

If that test passes, we now need to make sure that the next cell over in that direction is both in bounds, and unvisited. If both of those conditions hold, then we have a winner.

Once we know that a cell can be weaved under, it’s time to perform the weave. At its simplest, this is just a matter of setting the “under” bit, and then setting the necessary directions on the origin and target cells:

grid[y][x] |= dir
grid[ny][nx] |= U

nx, ny = nx + DX[dir], ny + DY[dir]
grid[ny][nx] |= OPP[dir]

To clarify: (x,y) is the originating cell, the one we’re carving away from. The first (nx,ny) is the cell we want to carve under. Then, we compute the target cell, on the far side of the cell we’re carving under; that’s the cell we’re going to end up at, after tunneling.

That’s really all there is to it. However, for variation, it’s nice to sometimes carve over the cell, instead of under it. This gives you some more interesting patterns, and it’s really not too hard to do. You just need to look at the cell to see which direction the passage is currently heading, and then change it so that the passage is rotated 90 degrees. Then you set the “under” bit. This has the effect of making the perpendicular passage (the one you’re adding) be the “over” passage, and the original passage the “under” passage.
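A sketch of that rotate-and-flag step, using the same assumed bit representation (this is my reading of the description, not the original code):

```ruby
N, S, E, W, U = 1, 2, 4, 8, 16   # assumed direction bits, plus the "under" flag

# Rotate the cell's passage 90 degrees and set the "under" bit: the
# original passage becomes the "under" one, and the perpendicular
# passage being added becomes the "over" one.
def carve_over(cell)
  cell == N|S ? E|W|U : N|S|U
end
```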

That’s really all there is to it! However, there’s one more interesting bit we need to consider, and that is how to render your weave maze.

Rendering

The display methods I used in my previous maze posts are not sufficient for rendering weave mazes, because there is no space between passages: the walls have no width. This makes it impossible to see whether a passage terminates at a given wall, or moves under it.

To work around this, my program uses Unicode characters from the box-drawing range (U+2500–U+257F). The result is still not ideal (in larger mazes, the eye tends to lose track of what is a passage and what is the space between passages), but it lets us do a quick-and-dirty display.

For best results, we treat each cell as a 3×2 grid of characters. For example, a N/S/E/W intersection would be rendered like this:

┘ └
┐ ┌

First, I define the set of tiles for each possible cell configuration:
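The idea is a simple lookup from a cell’s bit pattern to its two rows of characters. The entries below are a representative sketch (the bit values and most of the glyph choices are my guesses, not the actual table):

```ruby
N, S, E, W = 1, 2, 4, 8   # assumed direction bits

# each tile is the cell's 3x2 drawing: two rows of three characters
TILES = {
  0       => ["   ", "   "],
  N|S     => ["│ │", "│ │"],
  E|W     => ["───", "───"],
  N|S|E|W => ["┘ └", "┐ ┌"],
}

puts TILES[N|S|E|W].join("\n")
```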

Weave mazes can produce some very pretty designs, but randomly generated ones are not generally practical for most applications. Because the generator can “escape” from many dead-end situations by moving over or under a neighbor, the mazes tend to have very long passages (with a high “river” factor, meaning they can be quite windy), and very few junctions. Solving such a maze is not really very difficult.

That said, they can look quite impressive! In general, I’ve found that algorithms that encourage longer passages produce the best weave mazes. Algorithms like Kruskal’s and Prim’s (which produce mazes with lots of short deadends) will have a harder time meeting the “passages must not deadend over or under another passage” criterion, and as a result will have fewer over/under passages.

Give weave mazes a shot in the language of your choice, and share what you come up with! My own implementation, in Ruby, is given below.


Later this week I’m going to post an article about how you might implement “weave mazes”, like this one:

This is a maze where some of the passages “weave” over or under other passages. It’s a type of 3D maze, but very constrained (mostly for esthetic reasons):

Passages may only move under or over each other when they are perpendicular to each other.

A passage may not terminate (dead-end) either under or over another passage.

Passages may not change direction either over or under another passage.

Before I post my implementation, though, I wanted to get you thinking: how would you implement a weave maze?


A couple of weeks ago (already!) I asked two simple questions of my Twitter followers: what are (up to) four programming languages that you know and love, and what are (up to) four programming languages that you would like to learn?

Around the time I asked the question, I had almost 4200 followers on Twitter. Of those 4200, 38 responded. That’s almost one percent, and it is most definitely not a large enough sample from which to draw meaningful conclusions! Even if all 4200 of my twitter followers responded, it would still have been a strongly biased survey: most people following me on twitter are (or were) Ruby/Rails programmers, or found me through some other Ruby/Rails programmer. Asking this group what their favorite languages are is bound to have some strong biases.

And, sure enough, the top ten “know-and-love” languages among the 38 respondents were:

ruby (32)

javascript (25)

java (13)

python (9)

c (9)

c# (7)

php (6)

c++ (5)

perl (4)

lisp (4)

Compare that to the top ten “want-to-learn” languages:

erlang (22)

objective-c (12)

clojure (12)

python (12)

lisp (10)

haskell (9)

scala (9)

lua (5)

r (5)

c (5)

Ruby didn’t even make that list! (It actually came in at #11.)

So, what I’m trying to say is, the data is strongly slanted toward the preferences of Rails programmers who enjoy Ruby as a programming language. Still, there are some interesting tidbits to find here. I am definitely not a statistician, but here are four points that emerged that I found enlightening:

Every respondent included at least one object-oriented language among their know-and-love’s. For most people, that was Ruby, but Java, Python, C++, Objective-C, and others were mentioned, too. 17 people mentioned two object-oriented languages, and 17 more mentioned three. Three people gave all four of their know-and-love languages as object-oriented languages. What do I take away from this? All we really know is that object-oriented languages are apparently widely taught/learned among my twitter followers. No surprise there!

Only 8 respondents indicated that they know-and-love a functional language. Six people gave only a single functional language among their four know-and-loves, one more gave two, and one more gave three. Is this because people don’t love functional languages? Definitely not. There is simply not enough data to draw any conclusions about why this number is smaller than OO languages, but it does make you wonder, doesn’t it? Are programmers with predominantly OO backgrounds intimidated by functional paradigms? As you’ll see in the next question, it’s definitely not a lack of interest!

The four functional languages that people gave as “know-and-love” are:

lisp (4)

haskell (3)

erlang (3)

clojure (1)

Now, compare that with the next point:

33 of the 38 respondents wanted to learn at least one functional language. That’s almost 87%, versus only 21% that already knew a functional language. There is some serious curiosity there; people know OO, but want to learn functional. And what functional languages are people curious about?

erlang (22)

clojure (12)

lisp (10)

haskell (9)

scheme (2)

ocaml (2)

ml (1)

f# (1)

Erlang, far and away, is the one most people are curious about. Interesting!

The last point: 32 respondents included at least one object-oriented language in their want-to-learn list. 12 people wanted to learn one OO language, 16 wanted to learn two, and four people wanted to learn three OO languages. Which languages?

python (12)

objective-c (12)

scala (9)

lua (5)

ruby (4)

c# (3)

smalltalk (3)

java (2)

javascript (2)

c++ (1)

io (1)

coffeescript (1)

groovy (1)

Curiouser and curiouser! I love that a predominantly Ruby-centric group wants Python and Objective-C above all others. ObjC actually makes sense to me, even aside from the popularity of iOS as a platform. ObjC has many similarities to Ruby, and is a great language for Rubyists. Python, though…I love that it’s tied for #1. It shows a certain open-mindedness in the Ruby community, which gives me warm fuzzies.

Anyway, I’ll say it again: there really wasn’t a large enough sample set to draw meaningful conclusions here. But it’s still fun to look at seeming patterns in the data. Hopefully you’ve found something useful here, too!


It’s funny how knowledge leads to knowledge. You start digging deeper into one thing, and discover threads leading off into related fields. I began by researching maze algorithms, decided I wanted to see what mazes in more complex tessellations would look like, and after one thing or another found myself learning about Wythoff constructions.

The result is Kaleidoscope, a library for generating uniform tilings using Wythoff constructions.

gem install kaleidoscope

A uniform tiling is a tessellation of a plane using regular polygons (with a few other constraints that I won’t go into here). What this means is that, in essence, you give Kaleidoscope a few input parameters, and it hands you a bunch of regular polygons.

require 'kaleidoscope'

pattern = Kaleidoscope::Pattern.new(6, 3)

pattern.generate! do |point|
  point.x * point.x + point.y * point.y < 10
end

pattern.polygons.each do |polygon|
  # ...
end

The polygons are generated around the origin of the plane, within a region you specify via the block to the generate! method. Once done, you can translate and scale the polygons to wherever you like, for rendering purposes. Kaleidoscope will even include basic tricoloring data for the polygons, to make it easy to decorate your pattern.
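Mapping the generated polygons onto a pixel canvas is then just a translate-and-scale of each point. A sketch, using stand-in structs since I’m only guessing at the polygon API beyond what’s shown above:

```ruby
Point   = Struct.new(:x, :y)
Polygon = Struct.new(:points)

# stand-in data; with Kaleidoscope you would iterate pattern.polygons
polygons = [Polygon.new([Point.new(0, 0), Point.new(1, 0), Point.new(0, 1)])]

scale, ox, oy = 40, 300, 300   # pixels per unit, and the canvas origin

screen = polygons.map do |poly|
  poly.points.map { |p| [ox + p.x * scale, oy + p.y * scale] }
end
```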

Some obligatory pretty pictures:

The real reason I got into all this, though, was to find interesting tessellations to build mazes in:

It was a fascinating project, and I had a lot of fun. Oh, and I did it all test-first—a first for me! I learned a ton about TDD.

That said, I’m ready to move on to other projects. I don’t expect I’ll do much more with this library, although there is definitely more that could be done. I’m releasing the code because others might find it interesting. If you do anything nifty with it, let me know!


Several people have asked how I did the Javascript demos in my maze algorithm articles, and while I’ve answered them a couple of times in the comments, I thought it might be interesting enough to warrant its own post.

The demos are actually implemented in CoffeeScript, a really elegant little language that compiles to Javascript. CoffeeScript lets you ignore (most of) Javascript’s warts, while still enjoying all of Javascript’s strengths. I like.

I’ve tried to anticipate most questions about building, installation, and usage in the readme, but in a nutshell:

You’ll need to install CoffeeScript if you want to do anything with the code.

“cake build” will compile the sources to Javascript.

“examples/index.html” has widgets for all the implemented algorithms for you to play with.

“Maze.createWidget” is the quick-and-easy way to embed a maze somewhere.

You can do it the hard way, too, if you need more control: instantiate a maze with the algorithm of your choice, then call Maze#step until the maze is generated (or Maze#generate to do it all at once).

Note that the implementations there are optimized for animating the algorithms; the source code is not a good place to learn how a typical maze implementation might look. Every algorithm is broken down so that it can be called piecewise, one step at a time. If you were going to implement any of these for any “serious” purpose, odds are you’d do it much more efficiently, and without all the ceremony that csMazes requires.

Still, if you just want to embed an animation of a maze algorithm on a web page, csMazes works quite well. Except for IE7. And probably other IE’s as well. (If you’re an IE guru, I’d appreciate patches, but please make sure your fixes don’t impact the animation performance on other browsers. I was able to make IE render the mazes okay, but then the animation performance on Chrome was abysmal.)

The code is in the public domain, so do with it what you will. If you do something fun with it, let me know!


(Hello Minecrafters! If you’re looking for random mazes you can build in Minecraft, you might be better served by this page I wrote. It’ll give you block-wise schematics for the maze, and will require less mental translation than the demos here. Just don’t use IE—it won’t work right in that browser. If you want to learn more about random maze generation, though, read on!)

Over the last six weeks I documented eleven different maze generation algorithms. It’s been a blast. I’ve had so much fun researching and implementing these! It’s been great to see some of you taking my advice and applying these to learning a new programming language, as well; I really believe there is a lot of value to be had, there.

I intend to write more maze-related posts, too. Some possible topics include methods for rendering your mazes, ways to implement “weave” mazes (mazes where passages pass over and under other passages), non-rectangular tessellations, and so on. We’ll see where it all goes.

To wrap up the algorithm series, here’s a quick recap of each of the eleven algorithms:

(Note: if you’re using a feed-reader, or reading this on a parasite-blog, you’ll probably be missing out on the Javascript demos. I’m not just trying to drive traffic, here, I really do think you’ll get more out of the article by reading it at the source!)

The recursive backtracker was my go-to algorithm for years. It’s fast, easy to implement, and generates mazes that are (to my eyes, at least) quite esthetically pleasing. Of all the algorithms, it generates the fewest dead-ends, and as a result has particularly long and winding passages. It’s especially hypnotic to watch the algorithm in action!

Eller’s algorithm is perhaps the most challenging to understand and implement of the algorithms I covered. Its clever use of sets allows it to generate infinitely tall mazes while only needing to examine a single row at a time. Esthetically, it strikes a nice balance between “long and winding” and “lots of cul-de-sacs”.

Kruskal’s algorithm is actually an algorithm for generating a minimum spanning tree (MST) for a graph. What I covered is actually a variation on that, which selects edges at random instead of by weight. The resulting mazes tend to have a lot of short cul-de-sacs. Implementation-wise, this algorithm might rank second in difficulty, some distance behind Eller’s algorithm.

Prim’s algorithm is another MST algorithm. Like the variation I described for Kruskal’s, the Prim variant I covered also selects edges at random rather than by weight, allowing the creation of random mazes. Like Kruskal’s, the resulting mazes tend to be heavy on the cul-de-sacs, but the implementation is pretty straight-forward.

The recursive division algorithm is novel in that it is the only one I covered that uses a “wall adding” technique, rather than a “passage carving” one. It is also novel in its fractal-like method of growing the maze. Theoretically, you could use this algorithm to generate infinitely large mazes by building sections on-demand, increasing the resolution as needed by repeating the algorithm on the currently focused area.

I included the Aldous-Broder algorithm mostly for completeness; the algorithm is optimized for generating “uniform spanning trees” (spanning trees selected at random, and uniformly, from the set of all possible spanning trees). Sadly, this constraint means the algorithm itself is very inefficient!

Wilson’s algorithm is another method for generating uniform spanning trees. It converges more rapidly than Aldous-Broder, but still is much less effective as a general maze generator than any of the other algorithms I covered.

The Hunt-and-Kill algorithm is similar to the recursive backtracker (they both tend to generate long, winding passages), but this algorithm will search the grid, iteratively, looking for a new blank cell when it encounters a dead-end. A variation on this algorithm was my first introduction to maze generation, almost twenty years ago!

My new favorite, the Growing Tree algorithm, can work identically to the recursive backtracker when implemented one way, and with another small tweak can be made to work very similarly to Prim’s algorithm. It all depends on how cells are selected from its set of active cells. In fact, attributes of both algorithms can be gained by clever combinations of cell selection methods! It’s quite a flexible algorithm.

The Binary Tree algorithm is an almost-trivially simple one, but you pay for that simplicity. The mazes it generates tend to have blemishes (long corridors spanning two sides) and a notable bias (routes tend to run diagonally). Still, for some applications this can be quite appropriate. Besides, with this algorithm you could theoretically have an infinitely large maze by generating only the area you need, on demand!

The last algorithm I covered was the Sidewinder algorithm. This is related to the Binary Tree algorithm, but does not have as noticeable a bias, and there is only one long passage across the top.

Thanks to everyone who has followed along for the last six weeks. Thanks especially go to those who shared their own implementations; I love seeing how these algorithms translate into other languages.

So, that’s it for the algorithms, but stay tuned for more maze-related topics in the coming weeks!

Click here to view on the original site: Original Post

Last of the (eleven!) maze algorithms on my list is the Sidewinder algorithm. With a name like that, it’s got to be cool, right?

Well, possibly. It’s got its problems, but it’s quick and easy to implement, and allows arbitrarily tall mazes. It’s closely related to the Binary Tree algorithm, but manages to get away with only one side being spanned by a passage, instead of two.

In a nutshell, it goes like this:

1. Work through the grid row-wise, starting with the cell at 0,0. Initialize the “run” set to be empty.

2. Add the current cell to the “run” set.

3. For the current cell, randomly decide whether to carve east or not.

4. If a passage was carved, make the new cell the current cell and repeat steps 2-4.

5. If a passage was not carved, choose any one of the cells in the run set and carve a passage north. Then empty the run set, set the next cell in the row to be the current cell, and repeat steps 2-5.

6. Continue until all rows have been processed.

So, while it’s related to the Binary Tree algorithm, it’s a bit more complicated. However, words don’t do it justice; it really is a lot more straightforward than it sounds.

Let’s walk through an example.

An example

I’ll use a 4×4 grid, and I’ll move fairly quickly, so pay attention. (You in the back! Eyes to the front!)

I’ll start with the first (top) row, although the algorithm works just fine if you start with the last one and work up. Starting at the top lets us get the one special case out of the way: the entire first row must be a single passage (because you can’t carve north from the northernmost row):

There, we’ve got that done with. Now, let’s get to the interesting bit. Let’s say we manage to carve 3/4 of the way across the row before we decide (randomly and arbitrarily) not to carve eastward:

The cells in red are the current “run set”. Since we’ve ended the current run of horizontal cells, we now choose one of these cells at random, and carve a passage north. Thus:

Then we clear out the run set:

Make the next cell the current cell:

And try to decide whether to carve eastward or not. However, at this point, we can’t carve east. It would take us outside the bounds of the maze, which is not allowed. So we have to end the run there, and choose one of the cells in the set to carve north from. The decision is out of our hands: with only a single cell in the set, we have to choose it. So we carve north:

Then we start the next row. Notice how each row is independent of the row preceding it, much like Eller’s algorithm.

For the next row, we decide immediately that we want to stop carving east on the very first cell:

So we stop, choose a cell from the run set (easy) and carve north:

Continuing, we connect the next two cells:

Randomly choose one of them to carve north from:

And continue:

And since we’re at the end of the row, we’re again forced to stop and carve north:

For the last row, let’s say we connect the first two cells:

Carve north and clear the set:

Then connect the last two cells:

And finish it off by adding a final northward connection:

And that’s our maze, courtesy of the Sidewinder.

Feel free to play with the following demos, to get a feel for how the Sidewinder works on larger grids. The single corridor spanning the top row is hard to miss, but see if you can spot any other properties of the algorithm.

All this talk of “run sets” might make you think you actually need to use a set. Well, you can, sure, if you want to. No one’s going to stop you, right? But if you notice that each set consists of consecutive cells, you can take a short-cut and just keep track of the cell that started the run. The “set” is then implied as the sequence of cells between the start of the run, and the current cell.

Aside from that, the algorithm has very few surprises. As you’d expect you need to iterate over each row:

```ruby
height.times do |y|
  run_start = 0
  # ...
end
```

At the start of each row I initialize the beginning of the current run to the first cell.

Then, within that loop, I iterate over each cell in the current row:

```ruby
width.times do |x|
  # ...
end
```

Within that loop is where the magic really happens. There are basically two conditions:

```ruby
if y > 0 && (x+1 == width || rand(2) == 0)
  # end current run and carve north
elsif x+1 < width
  # carve east
end
```

Remember how the first row is always a single corridor? That “y > 0” guard prevents us from ever ending the current run if we’re on the first row. Similarly, the “elsif” clause prevents us from carving east if we’re at the last cell in a row.

The second condition in the first clause makes sure that we always end the current run at the end of the current row (x+1 == width), but also allows us to randomly end the current run (rand(2) == 0).

Ending a run requires us to randomly choose a cell from the current run set, and carve north from it:

We also reset the run set to begin at the next cell.

The last bit of code, from the elsif clause, is simple: carving east is merely setting a bit:

```ruby
grid[y][x] |= E
grid[y][x+1] |= W
```

And that’s all there is to it. Like I said, it’s easier to implement than describe.
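Assembled into a single runnable sketch, the snippets above amount to something like this. The constant values and the method name are my own choices (the post leaves them implicit); the logic follows the snippets directly.

```ruby
# Sketch of the complete Sidewinder. The "run set" is implied by
# run_start: the run is always the cells run_start..x of the row.
N, S, E, W = 1, 2, 4, 8

def sidewinder(width, height)
  grid = Array.new(height) { Array.new(width, 0) }

  height.times do |y|
    run_start = 0

    width.times do |x|
      if y > 0 && (x + 1 == width || rand(2) == 0)
        # end the current run: carve north from a random cell in it
        cell = run_start + rand(x - run_start + 1)
        grid[y][cell] |= N
        grid[y - 1][cell] |= S
        run_start = x + 1
      elsif x + 1 < width
        # carve east
        grid[y][x] |= E
        grid[y][x + 1] |= W
      end
    end
  end

  grid
end
```

Note that every cell contributes exactly one passage (east, or the north carve for its run), so the result is always a perfect maze.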

Conclusion

With a name like “Sidewinder” you can’t help but wish the algorithm was cooler than it is. I’m actually not sure where the name comes from; a Google search reveals a routing algorithm with the same name, but I wasn’t able to find any solid information to compare with the one I describe here. (If anyone has more information, I’m definitely curious.)

The Sidewinder’s greatest “flaw”, if it can be called that, is that it is all but trivial to start at the bottom row of the maze, and work your way to the top. Any maze generated by the Sidewinder algorithm will never have any north-facing dead-ends, which means you can never get stuck when moving from south to north. (This is similar to the Binary Tree algorithm, which will never have any dead-ends in the direction of its bias.) And for quickly generating arbitrarily tall mazes, Eller’s algorithm is probably better (with fewer drawbacks), but is harder to implement.

Would I actually use the Sidewinder algorithm for anything? I’m not sure. I hesitate to say “never”, but I certainly can’t think of anything I’d use it for that another algorithm wouldn’t do better! But it was fun to implement, so it has entertainment value, if nothing else.

Give it a try yourself and share how it goes! For reference, my complete implementation follows:

sidewinder.rb on gist.github.com

Click here to view on the original site: Original Post

This next algorithm is crazy, crazy simple. It also is the only one of the algorithms I’ve covered with the ability to generate a perfect maze without keeping any state at all. It can build the entire maze by looking at only a single cell at a time.

Here’s how it works: for every cell in the grid, randomly carve a passage either north, or west.

That’s it. You can mix it up if you want, and choose between other diagonal sets: north/east, south/west, or south/east, but whichever diagonal set you select must be used consistently throughout the entire maze.

There’s another nifty attribute of this algorithm: if you can guarantee that for any given cell, you will always carve a particular direction, then you never need to keep any of the maze in memory. When you need to display some portion of the maze, you just “rebuild” it, trivially. This means you could create infinitely large mazes in very little memory.
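To make that concrete: one way to get the “always carve a particular direction for a given cell” guarantee is to derive the choice from a deterministic mix of the coordinates and a seed. The helper below is hypothetical (mine, not the post’s), purely to illustrate the idea:

```ruby
N, W = 1, 8

# Hypothetical helper: a reproducible "coin flip" per cell. Given the
# same seed, any region of an unbounded maze can be recomputed on
# demand, so nothing ever needs to be stored.
def carve_direction(x, y, seed)
  return nil if x.zero? && y.zero? # the north-west corner carves nothing
  return W if y.zero?              # northern row: can't carve north
  return N if x.zero?              # western column: can't carve west

  # mix the coordinates and seed into a deterministic bit
  h = (x * 2_654_435_761 + y * 2_246_822_519 + seed) & 0xffffffff
  h ^= h >> 16
  h.even? ? N : W
end
```

Rendering a window of the maze then just means calling this for each visible cell; the same coordinates always yield the same passage.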

It’s not all roses, though. A side-effect of this algorithm is that it has a strong diagonal bias. Also, two of the four sides of the maze will be spanned by a single corridor. But for some applications, that might be acceptable.

It’s almost too simple to bother, but let’s take a quick look at the algorithm in practice.

An example

I’ll use a trivial 3×3 grid.

Because this algorithm doesn’t need to consider the state of any adjacent cells, we can start from and proceed to anywhere we want; let’s begin with the bottom-right corner:

Then, we randomly carve a passage either north, or west. Let’s choose…north!

Let’s work through the rest of that bottom row similarly, carving west from the middle cell, and north from the south-west corner:

Note that for the bottom-left (south-west) corner, “west” was not an option, since it would take us outside the bounds of the grid. Thus, our hand was forced and we had to choose north. But we’re okay with that, right?

Let’s move on and process the second row:

Again, we were limited in our choices in the western-most cell: all we could choose was north, so we did.

Now, look what happens for the final row:

Because we can’t go north in any of those, we have to choose west. And for the north-west corner, we can’t choose either north, or west! So we simply mark it visited and trust that the algorithm will connect it by other means.

Notice the corridors spanning the north and west boundaries. This is one of the side-effects I mentioned. Also, while it’s not so visible in a maze this size, the maze has a strong north-west bias; there are no dead-ends (and never will be any dead-ends) facing either north or west. Thus, if you start at the south-east corner, getting to the north-west corner is trivial—you’ll never hit a dead-end. (Going the other direction is harder; it’s like carving wood with the grain, versus against the grain.)

Play around a bit with the following demo to see what I mean. You can also select a different bias here, to see how that changes things.

The implementation of this algorithm is as trivially simple as you might think. You only need to cover every cell of the grid. One way to do that is to simply iterate over every row and column, in order:

```ruby
height.times do |y|
  width.times do |x|
    # ...
  end
end
```

Within the loops, you determine which directions are valid:

```ruby
dirs = []
dirs << N if y > 0
dirs << W if x > 0
```

And then you randomly select one:

```ruby
if (dir = dirs[rand(dirs.length)])
  # ...
end
```

Once you’ve selected your direction, you just carve a passage between the two cells:

```ruby
nx, ny = x + DX[dir], y + DY[dir]
grid[y][x] |= dir
grid[ny][nx] |= OPPOSITE[dir]
```

And you’re done!
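Putting those pieces together, a complete sketch might look like the following. The constants and the explicit skip for the north-west corner are my additions; the rest mirrors the snippets above.

```ruby
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { N => -1, S => 1, E => 0, W => 0 }
OPPOSITE = { E => W, W => E, N => S, S => N }

def binary_tree(width, height)
  grid = Array.new(height) { Array.new(width, 0) }

  height.times do |y|
    width.times do |x|
      # determine which directions are valid for this cell
      dirs = []
      dirs << N if y > 0
      dirs << W if x > 0
      next if dirs.empty? # the north-west corner carves nothing

      # randomly select one, and carve the passage
      dir = dirs[rand(dirs.length)]
      nx, ny = x + DX[dir], y + DY[dir]
      grid[y][x] |= dir
      grid[ny][nx] |= OPPOSITE[dir]
    end
  end

  grid
end
```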

Conclusion

The sharp-eyed among you might have noticed the connection between the name of the algorithm, and its output. Yes, the “Binary Tree” algorithm creates a random binary tree! Imagine that.

Every cell has one connection toward the “root” (in the direction of the bias, e.g. north or west), and zero, one, or two connections in the other direction (south or east). The root is, in this case, the north-west corner. If you could reach in and pluck up the north-west corner, giving it a good tug, it would uproot its children, and their children, and their children, until you’re left with a binary tree dangling downward like the knobbly roots of some tenacious weed.

Now, there are obviously other ways to build mazes based on binary trees. A particularly fun one is to make a variation of the Growing Tree algorithm: select a root node anywhere in the grid and add it to the list. Then, while the list is not empty, you remove a node from the list, add up to two passages leading away from it, and add those neighbors to the list. (I call this the “Growing Binary Tree algorithm”.) You’ll end up with another random binary tree, but without the obvious blemishes of the Binary Tree algorithm.

Nevertheless, the Binary Tree algorithm is fun to implement, and can be perfectly suitable for some applications (for instance, if you’re inside the maze, the biases will be less obvious). And if you need stupendously enormous mazes that can’t exist entirely in memory, why, this is an excellent algorithm as well.

My full implementation is below. Give it a try yourself, and share what you come up with!

Click here to view on the original site: Original Post

Remember way back in the first article of this series, where I said that Recursive Backtracking was my favorite for generating mazes? Well, I changed my mind. My new favorite is the Growing Tree algorithm.

This algorithm is pretty cool. Configured one way, it mimics the behavior of the Recursive Backtracking algorithm. Configured another, it works almost exactly like Prim’s algorithm. Another trivial change, and you can generate mazes with attributes of both.

It’s really pretty slick. Here’s how it works:

1. Let C be a list of cells, initially empty. Add one cell to C, at random.

2. Choose a cell from C, and carve a passage to any unvisited neighbor of that cell, adding that neighbor to C as well. If there are no unvisited neighbors, remove the cell from C.

3. Repeat #2 until C is empty.

Pretty straight-forward, really. But the fun lies in how you choose the cells from C, in step #2. If you always choose the newest cell (the one most recently added), you’ll get the recursive backtracker. If you always choose a cell at random, you get Prim’s. It’s remarkably fun to experiment with other ways to choose cells from C.
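Those three steps, with the cell-selection policy passed in as a block, might be sketched like so. The constants and method shape are my own; the post’s actual implementation is discussed further down.

```ruby
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { N => -1, S => 1, E => 0, W => 0 }
OPPOSITE = { E => W, W => E, N => S, S => N }

def growing_tree(width, height, &select_index)
  grid = Array.new(height) { Array.new(width, 0) }
  cells = [[rand(width), rand(height)]] # the list "C"

  until cells.empty?
    index = select_index.call(cells.length)
    x, y = cells[index]

    # carve to a random unvisited neighbor, if one exists
    carved = false
    [N, S, E, W].shuffle.each do |dir|
      nx, ny = x + DX[dir], y + DY[dir]
      next unless nx >= 0 && ny >= 0 && nx < width && ny < height &&
                  grid[ny][nx] == 0
      grid[y][x] |= dir
      grid[ny][nx] |= OPPOSITE[dir]
      cells << [nx, ny]
      carved = true
      break
    end

    # no unvisited neighbors: retire this cell from the list
    cells.delete_at(index) unless carved
  end

  grid
end

# "choose newest" behaves like the recursive backtracker:
backtrackerish = growing_tree(10, 10) { |len| len - 1 }
# "choose random" behaves like Prim's:
primish = growing_tree(10, 10) { |len| rand(len) }
```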

Let’s look at a simple example.

An example

I’ll demonstrate the “choose newest” cell selection method, which will hopefully be enlightening enough that you can imagine on your own how the “choose random” method might proceed.

The algorithm begins by adding an arbitrary cell to the list. We mark the cell as “in”; I’m also going to assign the cell a number, visually, to help keep track of the relative age of each cell. Also, cells in red are in the list of “live” cells; they’ll go white once they’ve been removed from the list.

So, onward!

[diagram: the starting cell, labeled 1, highlighted red (in the list)]

Next step: we choose the newest cell from the list, randomly select one of its unvisited neighbors, carve a path to it, and add the neighbor to the list:

[diagram: cell 2 carved from cell 1; both red]

Let’s do it again, once more choosing the newest cell from the list:

[diagram: cell 3 carved from cell 2; cells 1 through 3 red]

See the pattern? By always choosing the cell most recently added to the list, each subsequent step simply extends the passage one more step, effectively doing a random walk. But how does the algorithm behave when the passage cannot be extended any further?

Let’s fast forward a bit and see what the behavior is. Here we are now, six iterations further along, and we’ve hit a dead-end at cell #9.

[diagram: cells 1 through 9 form a winding passage; cell 9 is a dead-end]

At this point, the algorithm would select the newest cell (#9), and then try to find an unvisited neighbor cell. Alas, there isn’t one! So cell #9 is voted off the island (that is, removed from the list):

[diagram: cell 9, now white, has been removed from the list; cells 1 through 8 remain red]

Then the algorithm goes around again, grabbing the newest cell from the list. This time, that’s #8, and sure enough, there is an unvisited adjacent cell that we can move to:

[diagram: cell 10 carved from cell 8]

Did you see that?! It backtracked!

Not really. It wasn’t intentionally backtracking; it just happened that the algorithm’s choice of cell was the same one that the backtracking algorithm would have chosen. And it will continue to choose identically to the backtracker, clear up until every cell has been visited and it has to “backtrack” all the way back to the original cell. At that point, the list of cells will be empty, and the algorithm will terminate.

Pretty cool!

Take a moment and think about how the algorithm would change if, instead of choosing the newest cell each time, we chose one at random. I’ve already told you it would behave like Prim’s algorithm in this case, but can you see why? I hope so, because I’m not going to draw any more diagrams tonight. 🙂

Instead, you can play with the following demo, which has presets for several different cell-selection methods. Choose a few different ones and compare how they behave.

Implementation-wise, this is one of the simplest algorithms (another reason to love it!). In fact, in my own implementation, the most complex part came from code that I added that lets you specify the cell-selection method when the script is executed!

It starts by selecting a random cell and adding it to the list:

```ruby
x, y = rand(width), rand(height)
cells << [x, y]
```

The program then simply loops until the list is empty:

```ruby
until cells.empty?
  # ...
end
```

Within the loop, we first select the cell to operate on. I’m going to mask my own program’s complexity here behind a simple “choose_index” method; it takes a number and returns a number less than that.

```ruby
index = choose_index(cells.length)
x, y = cells[index]
```

Next, we iterate over a randomized list of directions, looking for an unvisited neighbor. If no such neighbor is found, we delete the given cell from the list before continuing.

```ruby
[N, S, E, W].shuffle.each do |dir|
  nx, ny = x + DX[dir], y + DY[dir]
  if nx >= 0 && ny >= 0 && nx < width && ny < height && grid[ny][nx] == 0
    # ...
  end
end

cells.delete_at(index) if index
```

When a valid, unvisited neighbor is located, we carve a passage between the current cell and that neighbor, add the neighbor to the list, set index to nil (to indicate that an unvisited neighbor was found), and then break out of the innermost loop.

And that’s really all there is to it. Some possible implementations of the choose_index method might be:

```ruby
def choose_index(ceil)
  return ceil-1 if choose_newest?
  return 0 if choose_oldest?
  return rand(ceil) if choose_random?
  # or implement your own!
end
```

Try it out and see what new cell selection methods you discover!
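For instance, one hypothetical hybrid (my own arbitrary mix, not from the post) picks the newest cell half the time and a random one otherwise, blending backtracker-style corridors with Prim-style texture:

```ruby
# 50% "newest" (backtracker-like), 50% "random" (Prim-like).
# The split is arbitrary; tune it and watch the texture change.
def choose_index(ceil)
  rand(2) == 0 ? ceil - 1 : rand(ceil)
end
```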

Conclusion

Personally, I love the flexibility of this algorithm. It’s surprisingly easy—and addicting!—to try crazy new ideas for selecting cells, and the algorithm itself is almost trivially simple to implement. Performance-wise, the algorithm is one of the faster ones (assuming you can make removing a cell from the list reasonably fast), and while memory use is proportional to the size of the maze itself, that’s not unusual for most of these algorithms.

Seriously, what’s not to like?!

Give it a try and link to your implementations in the comments. Share your ideas for new cell selection methods, too—I’d love to see what you come up with!

For reference, the complete source of my implementation, in Ruby, is below. Remember that most of the code in my implementation is for cell selection and displaying the maze; the algorithm itself is only the tiny bit at the end!

Click here to view on the original site: Original Post

(Note: if you’re reading this in a feed reader, you’re going to be missing out on the illustrations and demonstrations.)

Alright, so the theme last week was “algorithms for generating uniform spanning trees.” If this week has a theme, it might be something like “algorithms that sometimes act like the recursive backtracker, but only kind of.”

Today’s algorithm is the “hunt-and-kill algorithm”. Sounds violent, doesn’t it? It’s actually quite tame. In a nutshell, it works like this:

1. Choose a starting location.

2. Perform a random walk, carving passages to unvisited neighbors, until the current cell has no unvisited neighbors.

3. Enter “hunt” mode, where you scan the grid looking for an unvisited cell that is adjacent to a visited cell. If found, carve a passage between the two and let the formerly unvisited cell be the new starting location.

4. Repeat steps 2 and 3 until the hunt mode scans the entire grid and finds no unvisited cells.
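Put together, those steps can be sketched as one self-contained method. The constants and structure here are my own compact rendering; the post builds up its own implementation piece by piece below.

```ruby
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { N => -1, S => 1, E => 0, W => 0 }
OPPOSITE = { E => W, W => E, N => S, S => N }

def hunt_and_kill(width, height)
  grid = Array.new(height) { Array.new(width, 0) }
  x, y = rand(width), rand(height)

  loop do
    # walk phase: random walk until the current cell has no unvisited neighbor
    loop do
      dir = [N, S, E, W].shuffle.find do |d|
        nx, ny = x + DX[d], y + DY[d]
        nx >= 0 && ny >= 0 && nx < width && ny < height && grid[ny][nx] == 0
      end
      break unless dir
      nx, ny = x + DX[dir], y + DY[dir]
      grid[y][x] |= dir
      grid[ny][nx] |= OPPOSITE[dir]
      x, y = nx, ny
    end

    # hunt phase: scan for an unvisited cell adjacent to a visited one
    found = nil
    grid.each_with_index do |row, hy|
      row.each_with_index do |cell, hx|
        next unless cell == 0
        dirs = [N, S, E, W].select do |d|
          nx, ny = hx + DX[d], hy + DY[d]
          nx >= 0 && ny >= 0 && nx < width && ny < height && grid[ny][nx] != 0
        end
        next if dirs.empty?
        dir = dirs[rand(dirs.length)]
        nx, ny = hx + DX[dir], hy + DY[dir]
        grid[hy][hx] |= dir
        grid[ny][nx] |= OPPOSITE[dir]
        found = [hx, hy]
        break
      end
      break if found
    end

    break unless found # the hunt found nothing: the maze is complete
    x, y = found
  end

  grid
end
```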

Let’s walk through an example.

An example

I’ll just use a basic 4×4 grid:

Now, I’ll give you the walk phase as a sequence of frames here; it’s not that interesting, really, until it reaches a dead-end.

And there our leisurely, inebriated stroll comes to a screeching halt. All possible directions lead either out of bounds, or into an already-visited neighbor. At this point, the recursive backtracker would begin backtracking, looking for a previously visited cell in the stack that had unvisited neighbors. The hunt-and-kill algorithm is not nearly so sophisticated: stuck? Go hunting.

And so we hunt. Beginning at the first row, we begin scanning each row for an unvisited cell with a visited neighbor. It turns out to be our lucky day: our very first cell is a match: unvisited, with a visited neighbor. We connect the two:

And then we start a random walk from our new starting point:

Stuck again, so we go hunting. There are no cells in the first row that match:

And no matches in the second row, either. (Remember, we’re looking for unvisited cells with visited neighbors.)

The third row, however, has a match in its last cell:

So, we connect that unvisited cell to any one of its visited neighbors (at random), and do our random walk:

And we again stub our digital toes (see what I did there?) on another dead-end. We’re stuck, so we go hunting again, looking row-by-row for an unvisited cell.

The scan completed without finding any unvisited cells, so the algorithm terminates and leaves us with our maze.

Try the following demonstrations to see how the algorithm behaves at larger resolutions:

This algorithm is pretty straightforward to implement. It even has a few simple optimizations that you can do, though I won’t touch on those here.

My implementation begins by choosing a random starting point, and then looping over the two phases, “walk” and “hunt”, until the hunt phase terminates without finding any new location. My walk and hunt implementations both return either a two-element array (indicating the coordinates of the next starting location), or a nil (indicating that the phase in question has terminated).

```ruby
x, y = rand(width), rand(height)

loop do
  x, y = walk(grid, x, y)
  x, y = hunt(grid) unless x
  break unless x
end
```

The walk implementation is straightforward. It just iterates over a randomized list of directions, and returns nil if none of the directions are valid:

```ruby
def walk(grid, x, y)
  [N, S, E, W].shuffle.each do |dir|
    # ...
  end

  nil
end
```

For each direction, it computes the neighbor in that direction, and then tests it to see if it is within the bounds of the maze and unvisited:

```ruby
nx, ny = x + DX[dir], y + DY[dir]
if nx >= 0 && ny >= 0 && ny < grid.length && nx < grid[ny].length && grid[ny][nx] == 0
  # ...
end
```

When a neighbor is found that fits the bill, the neighbor and the current cell are connected, and the neighbor is returned as the new current cell:

The hunt phase, on the other hand, is a little more involved (but not much). It iterates over each cell in the grid, row-wise, skipping visited cells, and returning nil if it does not find any unvisited cells:

```ruby
def hunt(grid)
  grid.each_with_index do |row, y|
    row.each_with_index do |cell, x|
      next unless cell == 0

      # ...
    end
  end

  nil
end
```

Within that innermost loop, if the cell is unvisited, we compute a list of neighbors that are already part of the maze:

Then, we randomly choose one of them (skipping to the next cell if there are no neighbors to select):

```ruby
direction = neighbors[rand(neighbors.length)] or next
```

Then, we compute the coordinates of the neighbor in the chosen direction, carve a passage to it, and then return the new current coordinates:

```ruby
nx, ny = x + DX[direction], y + DY[direction]
grid[y][x] |= direction
grid[ny][nx] |= OPPOSITE[direction]

return [x, y]
```

And there’s your hunt-and-kill algorithm!

Conclusion

The random walk phase tends to produce long, windy passages that are reminiscent of the recursive backtracking algorithm, so if you like the aesthetic of one, you’ll probably enjoy the other, too. Both algorithms produce mazes with fewer dead-ends than most of the other algorithms.

The hunt-and-kill algorithm will always have a place in my heart, because it was a variation on it that introduced me to maze generation back in high school. It was first shown to me by Stefano Mazzocchi when he was an exchange student at my alma mater (Siuslaw High School, in tiny Florence, Oregon). In fact, it was this variation on the hunt-and-kill that I used when I wrote a (somewhat popular) random dungeon generator for D&D back in 2001! (All that remains now is some C source code on GitHub, alas.)

Anyway, enough nostalgic wallowing. Have a go at the hunt-and-kill algorithm, and share links to your creations in the comments. For reference, the complete source for my own implementation is here:

Click here to view on the original site: Original Post

The last algorithm I mentioned, Aldous-Broder, was (if you’ll recall) originally devised as a way to generate uniform spanning trees (UST). (Review: a spanning tree is a tree that connects all the vertices of a graph. A uniform spanning tree is any one of the possible spanning trees of a graph, selected randomly and with equal probability.)

Aldous-Broder was effective, but inefficient. For this (the seventh!) article in my series on maze algorithms, I’ll focus on Wilson’s algorithm for selecting uniform spanning trees.

The algorithm goes something like this:

1. Choose any vertex at random and add it to the UST.

2. Select any vertex that is not already in the UST and perform a random walk until you encounter a vertex that is in the UST.

3. Add the vertices and edges touched in the random walk to the UST.

4. Repeat 2 and 3 until all vertices have been added to the UST.
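In code, those steps might be sketched like this. The helper names and the IN marker bit (to flag the seed cell before it has any passages) are my own conventions; the post walks through its actual implementation below.

```ruby
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { N => -1, S => 1, E => 0, W => 0 }
OPPOSITE = { E => W, W => E, N => S, S => N }
IN = 16 # marker bit: cell is part of the maze

def wilsons(width, height)
  grid = Array.new(height) { Array.new(width, 0) }
  grid[rand(height)][rand(width)] |= IN # step 1: seed the maze
  remaining = width * height - 1

  while remaining > 0
    # step 2: pick an unvisited cell and random-walk until we hit the
    # maze, remembering only the LAST exit direction from each cell
    # (this is what erases the loops)
    begin
      cx, cy = rand(width), rand(height)
    end while grid[cy][cx] != 0
    start_x, start_y = cx, cy
    visits = {}

    while grid[cy][cx] == 0
      dir = [N, S, E, W].shuffle.find do |d|
        nx, ny = cx + DX[d], cy + DY[d]
        nx >= 0 && ny >= 0 && nx < width && ny < height
      end
      visits[[cx, cy]] = dir
      cx, cy = cx + DX[dir], cy + DY[dir]
    end

    # step 3: replay the loop-erased path, carving as we go
    x, y = start_x, start_y
    while (dir = visits[[x, y]])
      nx, ny = x + DX[dir], y + DY[dir]
      grid[y][x] |= dir
      grid[ny][nx] |= OPPOSITE[dir]
      remaining -= 1
      x, y = nx, ny
    end
  end

  grid
end
```

Following each cell’s last-recorded exit cannot cycle (a cycle would mean the walk never left it), so the replay always reaches the maze.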

So, it’s still doing the random walk, but I think you’ll see that this algorithm converges much more rapidly than Aldous-Broder. It’s still annoying to watch an animation of the early stages of the process, but not nearly as bad as the other.

As before, here’s a simple example of the algorithm in action.

An example

A 4×4 grid will suffice for this demo.

We begin by randomly adding a cell to the maze:

Then, we pick another unvisited cell at random and do a random walk until we encounter that first cell. Note that this random walk has a few constraints: although it can cross cells it has already walked (as long as they are not already in the maze), we don’t want to allow the final path to have any loops in it. Thus, we also record the direction most recently used to exit each cell, and will use those directions to form the final path once the walk encounters an “in” cell.

[diagram frames: the random walk in progress; each visited cell shows the direction last used to exit it]

Once we reach a cell that is already part of the maze, the walk terminates. The next phase simply goes back to the cell at the beginning of the walk, and follows the arrows, adding vertices and edges to the maze as we go, until we reach the final cell of the walk.

[diagram: the loop-erased path of exit arrows (→, ↓, ↑) leading from the start cell to the maze]

All the other cells that were visited during the walk, but which did not make the “final cut”, as it were, are simply reset to “out”.

Now, we do it again. Notice that this time there are four cells in the maze, rather than just one, giving us many more targets for the walk to hit. This is what lets the algorithm converge more rapidly than Aldous-Broder: each pass through the algorithm increases the odds that the next pass will terminate sooner.

Here we go, another pass. I’m going to “cheat” and let this one consume more cells than it might (probabilistically) do otherwise. Just to keep the article shorter. 🙂

[diagram frames: a second random walk, again recording each cell’s most recent exit direction]

Then we walk the path and add the vertices and edges to the maze, again:

By now you should be able to see the pattern emerging. Each pass would add another random “branch” to the tree, proceeding until all cells have been added to the maze.

Try the following demos if you’d like to see an animation of the algorithm in action, or step through it more carefully:

Personally, I found Wilson’s algorithm to be one of the messiest to implement, mostly due to the necessity of keeping track of directions while doing the random walk. I’m sure there are cleaner ways to implement this. Yes, that is a challenge! (Admittedly, some of the complexity of my implementation is due to it trying to animate the algorithm; feel free to simplify your own attempt.)

The algorithm begins by setting a random cell as “in”, and recording how many cells still remain outside:

Then, I simply loop until there are no more cells that are outside the maze:

```ruby
while remaining > 0
  # ...
end
```

Within the loop, the random walk is performed. For now, I’m going to wave my hands and we’ll move on, but I’ll describe the walk method more closely in a bit. It returns the path that was taken as an array of (x, y, direction) tuples:

```ruby
walk(grid).each do |x, y, dir|
  # ...
end
```

For each cell in the random walk, we add it to the maze (by applying the given direction bit to each cell in the path), and then decrement the number of remaining cells:

```ruby
nx, ny = x + DX[dir], y + DY[dir]
grid[y][x] |= dir
grid[ny][nx] |= OPPOSITE[dir]

remaining -= 1
```

Now, the walk method itself is the messy bit. It begins by looping until a random cell is selected that is not in the maze:

Once a starting point is found, I initialize a hash of visited cells (where I will record both the coordinates of each cell in the walk, as well as the direction last used to exit the cell). And then I start walking:

```ruby
visits = { [cx, cy] => 0 }
start_x, start_y = cx, cy # where the random walk started
walking = true

while walking
  # ...
end
```

Now, inside the walking loop, I first set walking to false (since I’m going to be optimistic and assume that the next move will hit an “in” cell). Then, I look at a randomized list of directions:

```ruby
walking = false # cross fingers!
[N, S, E, W].shuffle.each do |dir|
  # ...
end
```

For each direction in the randomized list, we’ll compute the neighbor in that direction and then check that the neighbor is actually a valid cell (within the boundaries of the grid):

```ruby
nx, ny = cx + DX[dir], cy + DY[dir]
if nx >= 0 && ny >= 0 && ny < grid.length && nx < grid[ny].length
  # ...
end
```

Once a valid neighbor is found, we immediately record that direction as the exit vector from the current cell.

```ruby
visits[[cx, cy]] = dir
```

Then, if the neighbor cell is already in the maze, we get to break out of the loop (because we’ve finished the walk). Otherwise, we set the neighbor to be our current cell, and we continue walking:

```ruby
if grid[ny][nx] != 0
  break
else
  cx, cy = nx, ny
  walking = true
  break
end
```

Once we’re done walking, we still need to convert the visits hash to a valid path, which we’ll return. I do this by following the visits from the start cell, through the direction used to exit each visited cell, until the path ends:

```ruby
path = []
x, y = start_x, start_y
loop do
  dir = visits[[x, y]] or break
  path << [x, y, dir]
  x, y = x + DX[dir], y + DY[dir]
end
```

Finally, we return the path, which the top-level loop uses to add the cells to the maze:

```ruby
return path
```

Whew!

Conclusion

Wilson’s is still not a very efficient algorithm: on large grids, the first walk can be excruciatingly slow. However, it’s much nicer than Aldous-Broder, and is still guaranteed to return a uniform spanning tree.

I’m sure any of you could do a better job implementing this than I’ve presented here. In fact, I’d love for you to prove it. 🙂 Give it a try and link to your implementations in the comments!

For reference, the complete source for my own (Ruby) implementation is here:

wilsons.rb on gist.github.com


The next maze algorithm I’m going to cover, the Aldous-Broder algorithm, is one of the simplest imaginable. It’s also one of the least efficient. In fact, it is so inefficient that you might wonder why anyone would even bother implementing it, let alone discovering it and having their name attached to it.

D. Aldous and A. Broder were two researchers, working independently, who were investigating uniform spanning trees. A spanning tree, as you may or may not already know, is any tree that connects all the vertexes in a graph. A uniform spanning tree is one chosen randomly (and with equal probability) among all the possible spanning trees of a given graph.

Riveting, isn’t it?

There are actually applications for this. Outside of recreational maze generation, I mean. I don’t know what they are myself, but Wikipedia informs me that they are in probability and mathematical physics. (If you know more specifically how uniform spanning trees are being applied in research, please share in a comment!)

Anyway: Aldous and Broder were researching these uniform spanning trees, and independently arrived at the following algorithm:

Choose a vertex. Any vertex.

Choose a connected neighbor of the vertex and travel to it. If the neighbor has not yet been visited, add the traveled edge to the spanning tree.

Repeat step 2 until all vertexes have been visited.

Remember: this algorithm is notable in that it selects from all possible spanning trees (i.e. mazes) of a given graph (i.e. field) with equal probability. The other algorithms I’ve covered don’t have this property.

Let’s walk through a trivial example. If you haven’t spotted yet why the algorithm is so inefficient, hopefully the example will demonstrate it.

An example

For this demonstration, we’ll use a 3×3 grid. Any bigger than that and we could be here all day.

We choose a cell at random:

And travel to a random neighbor:

And since the neighbor had not been visited before, we carve a passage between the two:

Simple enough! Let’s do a few more quick iterations: choose a random neighbor, and carve a passage to it (but only if it hasn’t been visited before):

Now, watch what happens next. We choose a neighbor at random, but two of the three neighbors are already visited, which means there’s a very good chance of one of them being picked. Sure enough:

That neighbor’s already been visited! However, this move is allowed by the algorithm; the only constraint is that because the neighbor has been visited before, we don’t carve a passage to it from the previous cell.

Sure, alright. So, we’ll go with it. I think you’re seeing the pattern here, so I’ll just march through the rest of the process:

I hope you noticed how the cursor seemed to meander a bit, there at the end. If you run an animation of Aldous-Broder, you’ll probably find yourself shouting “JUST GO OVER THERE” to the cursor, once or twice. This is because Aldous-Broder, while easy to implement, lacks any real heuristics. And you can’t really add any heuristics without changing the probability curve.

Try the following demo to see just how frustrating Aldous-Broder can be to watch:

I guess it really comes down to what you want out of a maze generator. If you absolutely must have a perfectly uniform spanning tree, then your options are limited.

Implementation

One of the really nice things about Aldous-Broder is how trivial it is to implement. If you can implement the grid data structure, the algorithm itself is hardly worth mentioning.

But I’ll mention it anyway.

My implementation begins by choosing a random point, and then computing the number of cells in the grid that remain unvisited:
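The original snippet boils down to the first few lines of the following sketch; I’ve continued it into a complete, runnable reconstruction so the shape of the loop is clear (the constants and names are my assumptions):

```ruby
# Hypothetical constants, assumed to match the rest of the series.
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { E => 0, W => 0, N => -1, S => 1 }
OPPOSITE = { E => W, W => E, N => S, S => N }

def aldous_broder(width, height)
  grid = Array.new(height) { Array.new(width, 0) }

  # choose a random starting point, and count the cells left to visit
  x, y = rand(width), rand(height)
  remaining = width * height - 1

  while remaining > 0
    [N, S, E, W].shuffle.each do |dir|
      nx, ny = x + DX[dir], y + DY[dir]
      next unless nx >= 0 && ny >= 0 && nx < width && ny < height

      # carve a passage only if the neighbor has not been visited yet
      if grid[ny][nx] == 0
        grid[y][x] |= dir
        grid[ny][nx] |= OPPOSITE[dir]
        remaining -= 1
      end

      # move to the neighbor regardless, and break out of the inner loop
      x, y = nx, ny
      break
    end
  end

  grid
end
```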

Regardless of whether the cell has been visited or not, we then make the new cell the current cell, and break out of the innermost loop (the one where we look for a valid direction):

x, y = nx, ny
break

And that’s it. All that remains is to fire it off and sit back while it does its little drunkard’s walk all over our nice clean grid.

Conclusion

Aldous-Broder isn’t the most efficient algorithm on the block, or the smartest, but it is definitely well-suited for the problem domain that it was created for: finding uniform spanning trees. However, even there, there are more efficient algorithms. Next time, I’ll demonstrate Wilson’s algorithm, a (slightly) more efficient way to find a uniform spanning tree.

In the meantime, try your hand at Aldous-Broder. It shouldn’t set you back more than a couple of minutes, and it really is pretty fun.

For reference, here is my own (Ruby) implementation of Aldous-Broder:


All of the maze algorithms I’ve covered so far (recursive backtracking, Eller’s, Kruskal’s, and Prim’s) were implemented as “passage carvers”: they started with a solid grid and hollowed the passages out of it. Some algorithms can be implemented a different way (and indeed, some must be implemented this way): as “wall adders”, where the process begins with a large empty space, and adds walls until a maze results.

The “recursive division” algorithm is one that must be implemented as a wall adder. This algorithm is particularly fascinating because of its fractal nature: you could theoretically continue the process indefinitely at progressively finer and finer levels of detail.

It works like this:

Begin with an empty field.

Bisect the field with a wall, either horizontally or vertically. Add a single passage through the wall.

Repeat step #2 with the areas on either side of the wall.

Continue, recursively, until the maze reaches the desired resolution.

You’ll find that at every step, you still have a valid maze. Each repetition of the algorithm simply increases the level of detail. Pretty cool!

Let’s walk through an example to get an idea of how this works in practice.

An example

We’re going to start with a larger grid this time, 5×5, just to show how the algorithm quickly converges. (If there is too much hand-waving, call me on it in the comments! I’ll be more than happy to expand wherever things get too obtuse.)

First, we’ll bisect it vertically, because I’m just that kind of a guy. I’ll just close my eyes and point…here…so we’ll add a wall at x=3:

Then, we randomly add a gap in the wall, so that we preserve the connectability of the graph:

And that finishes the first iteration. Now, we repeat these steps on each of the two subfields that we created by adding that wall. Let’s go with the field on the right, first.

Now, theoretically, we could bisect that either vertically or horizontally. However, for best results, if you’re bisecting a field that is taller than it is wide (like we are now), it’s best to bisect horizontally. It’ll prevent really glaring biases in the finished product, and generally makes for more interesting mazes. (Similarly, when bisecting a field that is wider than it is tall, you should prefer a vertical wall.)

So, let’s cut this field in half horizontally this time, and then add a passage through it:

(From here on out, I’m going to merge the “add wall” and “erase passage” steps into a single step: you’ve been warned!)

That completes another iteration. Let’s do one more quick iteration on the top-right field, now, and see how the recursion bottoms out when you hit your target resolution.

The field in the top-right is a square, so we can randomly choose between a horizontal or a vertical bisection. Let’s go with vertical:

Now, we’ve got two new subfields to recurse into, but hark! We can’t divide them any further without passing our target resolution (our grid is only 5×5, remember). So the recursion bottoms out, and we rewind back up the stack.

I’m going to be moving much faster now, chugging along to the end of the algorithm.

And we’re done! The result is a perfect maze.

If you’re not using IE, you can step through the algorithm yourself using this nifty Javascript demo. The one on the left is a 5×5 grid, and the one on the right is a 15×15 grid (so you can get a better feel for how the recursion works on larger mazes).

The choose_orientation method gave me a simple abstraction for deciding which way a field with the given dimensions ought to be bisected. Once I had that in hand, I just needed a divide method, which recursively divided the field as described:

def divide(grid, x, y, width, height, orientation)
  # ...
end
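As for choose_orientation itself, it might be as simple as this sketch (the constant values are my assumptions): prefer to cut across the longer dimension, and flip a coin for square fields.

```ruby
# Hypothetical orientation constants.
HORIZONTAL, VERTICAL = 1, 2

def choose_orientation(width, height)
  if width < height
    HORIZONTAL   # taller than wide: bisect horizontally
  elsif height < width
    VERTICAL     # wider than tall: bisect vertically
  else
    rand(2) == 0 ? HORIZONTAL : VERTICAL
  end
end
```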

The algorithm was started by calling divide on the entire grid:

The method itself first checks to see if the maze has reached the target resolution:

return if width < 2 || height < 2

Then, it sets a convenience flag so we can query the orientation, and starts answering questions about where the wall needs to be drawn, and where the passage through the wall will be:

horizontal = orientation == HORIZONTAL

# where will the wall be drawn from?
wx = x + (horizontal ? 0 : rand(width-2))
wy = y + (horizontal ? rand(height-2) : 0)

# where will the passage through the wall exist?
px = wx + (horizontal ? rand(width) : 0)
py = wy + (horizontal ? 0 : rand(height))

Then a few helper values get set, to aid in drawing the wall:

# what direction will the wall be drawn?
dx = horizontal ? 1 : 0
dy = horizontal ? 0 : 1

# how long will the wall be?
length = horizontal ? width : height

# what direction is perpendicular to the wall?
dir = horizontal ? S : E

Once that’s all set up, we can actually draw the wall:

length.times do
  grid[wy][wx] |= dir if wx != px || wy != py
  wx += dx
  wy += dy
end

Lastly, we determine the bounds of the subfields, and recurse:
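Here is my reconstruction of that final step, folded into a complete, runnable divide. The subfield arithmetic and the slightly adjusted rand bounds are my own, so treat it as a sketch:

```ruby
# Hypothetical constants: orientations, plus S/E wall bits.
HORIZONTAL, VERTICAL = 1, 2
S, E = 1, 2

def choose_orientation(width, height)
  if width < height
    HORIZONTAL
  elsif height < width
    VERTICAL
  else
    rand(2) == 0 ? HORIZONTAL : VERTICAL
  end
end

def divide(grid, x, y, width, height, orientation)
  return if width < 2 || height < 2

  horizontal = orientation == HORIZONTAL

  # where the wall starts (bounds widened slightly so rand never gets 0)
  wx = x + (horizontal ? 0 : rand(width - 1))
  wy = y + (horizontal ? rand(height - 1) : 0)

  # where the passage through the wall sits
  px = wx + (horizontal ? rand(width) : 0)
  py = wy + (horizontal ? 0 : rand(height))

  dx = horizontal ? 1 : 0
  dy = horizontal ? 0 : 1
  length = horizontal ? width : height
  dir = horizontal ? S : E

  length.times do
    grid[wy][wx] |= dir if wx != px || wy != py
    wx += dx
    wy += dy
  end

  # recurse into the subfields on either side of the wall; only one of
  # wx/wy was mutated above, so the other still marks the wall's position
  w, h = horizontal ? [width, wy - y + 1] : [wx - x + 1, height]
  divide(grid, x, y, w, h, choose_orientation(w, h))

  nx, ny = horizontal ? [x, wy + 1] : [wx + 1, y]
  w, h = horizontal ? [width, y + height - wy - 1] : [x + width - wx - 1, height]
  divide(grid, nx, ny, w, h, choose_orientation(w, h))
end
```

Every bisection removes one fewer edge than it draws, so the finished field always has exactly cells − 1 open internal edges: a perfect maze.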

I really had a blast implementing this algorithm. It’s especially hypnotic to watch the demos as they animate its progress. I’d love to see someone implement a mandelbrot-style “infinite zoom” with a recursive-division maze. I’m not sure there’s a lot of practical value there, but it seems like it could have a high degree of “cool”. 🙂

As far as the mazes themselves, it’s readily apparent after generating a few that the mazes will tend to have a lot of short cul-de-sacs, like Kruskal’s and Prim’s, and the nature of the algorithm tends to create long walls that span significant portions of the field, which leads to visual artifacts that (in my opinion) detract from the appearance of the maze. Some of this could be overcome by artfully choosing where to place walls, but that will only take you so far. So, alas, I don’t think I’ll be using this algorithm for anything besides demo animations.

Give it a try yourself, though, and see how your implementation turns out. If you come up with something cool, let me know!

If you recall, the random variant of Kruskal’s algorithm worked by randomly selecting edges from the graph, and adding them to the maze if they connected disjoint trees. Visually, this had the effect of growing the maze from many random points across the grid.

Prim’s approaches the problem from a different angle. Rather than working edgewise across the entire graph, it starts at one point, and grows outward from that point. The standard version of the algorithm works something like this:

Choose an arbitrary vertex from G (the graph), and add it to some (initially empty) set V.

Choose the edge with the smallest weight from G, that connects a vertex in V with another vertex not in V.

Add that edge to the minimal spanning tree, and the edge’s other vertex to V.

Repeat steps 2 and 3 until V includes every vertex in G.

And the result is a minimal spanning tree of G. Straightforward enough!

With one little change, it becomes a suitable method for generating mazes: just change step 2 so that instead of selecting the edge with the smallest weight, you select an edge at random, as long as it bridges the so-called “frontier” of the maze (the set of edges that move from within the maze, to a cell that is outside the maze).

As before, let’s walk through an example and see how it works in practice.

An example

Let’s start with a simple 3×3 grid:

Now, we choose a point at random and add it to the maze:

For efficiency, let’s call the set of all cells that are not yet in the maze, but are adjacent to a cell that is in the maze, the “frontier”. We’ll color the frontier cells red:

Now, we choose one of these frontier cells at random and carve a passage from that frontier cell to whichever adjacent cell is already part of the maze. Then we’ll mark the neighbors of the formerly frontier cell, as “frontier” cells themselves.

Rinse and repeat:

Now, here’s an interesting bit. Look what happens if we (ahem, “randomly”) choose the cell at (1,0) (the top middle). It is adjacent to two cells that are already “in” the maze. The algorithm resolves this by simply choosing one of the “in” neighbors at random. Prim’s doesn’t care which neighbor is picked, only that each frontier cell eventually be connected to a cell already within the maze.

Let’s just keep it going to the end, now, chug chug chug:

The algorithm terminates when there are no more frontier cells to choose from, giving us the anticipated perfect maze.

Implementation

The largest bit of implementing Prim’s algorithm (for me) seems to go toward managing the interactions with that frontier set. Maybe your experience will be different. You basically need two operations: mark a cell as “in” (which then marks the “out” neighbors as frontier cells), and one that returns all the “in” neighbors of a given frontier cell. Something like this: (and please excuse the apparent hand-waving around the add_frontier method; it’s not complicated, just not entirely relevant. The full implementation is given at the end of the article.)

def mark(x, y, grid, frontier)
  grid[y][x] |= IN
  add_frontier(x-1, y, grid, frontier)
  add_frontier(x+1, y, grid, frontier)
  add_frontier(x, y-1, grid, frontier)
  add_frontier(x, y+1, grid, frontier)
end

def neighbors(x, y, grid)
  n = []

  n << [x-1, y] if x > 0 && grid[y][x-1] & IN != 0
  n << [x+1, y] if x+1 < grid[y].length && grid[y][x+1] & IN != 0
  n << [x, y-1] if y > 0 && grid[y-1][x] & IN != 0
  n << [x, y+1] if y+1 < grid.length && grid[y+1][x] & IN != 0

  n
end

Once you’ve got that implemented (along with the necessary supporting methods and data structures), the algorithm itself is remarkably straightforward.

You start by marking a random cell:

mark(rand(width), rand(height), grid, frontier)

Then, you simply iterate until the frontier set is empty:

until frontier.empty?
  # ...
end

Within the loop, we choose a frontier cell at random:

x, y = frontier.delete_at(rand(frontier.length))

Then we choose a random “in” neighbor of that frontier cell:

n = neighbors(x, y, grid)
nx, ny = n[rand(n.length)]

Then, we record a passage from the neighbor cell to the frontier cell:

dir = direction(x, y, nx, ny)
grid[y][x] |= dir
grid[ny][nx] |= OPPOSITE[dir]

And finally, we mark the frontier cell as being “in” the maze (and add any of its outside neighbors to the frontier):

mark(x, y, grid, frontier)
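Assembled into a runnable whole, it might look like this. The add_frontier and direction helpers, along with the constants, are my own assumed implementations, so consider this a sketch rather than the canonical source:

```ruby
# Hypothetical constants: direction bits, plus IN/FRONTIER state bits.
N, S, E, W = 1, 2, 4, 8
IN, FRONTIER = 0x10, 0x20
OPPOSITE = { E => W, W => E, N => S, S => N }

# add a cell to the frontier, if it's inside the grid and still "out"
def add_frontier(x, y, grid, frontier)
  if x >= 0 && y >= 0 && y < grid.length && x < grid[y].length && grid[y][x] == 0
    grid[y][x] |= FRONTIER
    frontier << [x, y]
  end
end

def mark(x, y, grid, frontier)
  grid[y][x] |= IN
  add_frontier(x - 1, y, grid, frontier)
  add_frontier(x + 1, y, grid, frontier)
  add_frontier(x, y - 1, grid, frontier)
  add_frontier(x, y + 1, grid, frontier)
end

def neighbors(x, y, grid)
  n = []
  n << [x - 1, y] if x > 0 && grid[y][x - 1] & IN != 0
  n << [x + 1, y] if x + 1 < grid[y].length && grid[y][x + 1] & IN != 0
  n << [x, y - 1] if y > 0 && grid[y - 1][x] & IN != 0
  n << [x, y + 1] if y + 1 < grid.length && grid[y + 1][x] & IN != 0
  n
end

# which direction bit points from (fx,fy) toward the adjacent (tx,ty)?
def direction(fx, fy, tx, ty)
  return E if fx < tx
  return W if fx > tx
  return S if fy < ty
  N
end

def prims(width, height)
  grid = Array.new(height) { Array.new(width, 0) }
  frontier = []
  mark(rand(width), rand(height), grid, frontier)

  until frontier.empty?
    x, y = frontier.delete_at(rand(frontier.length))
    n = neighbors(x, y, grid)
    nx, ny = n[rand(n.length)]

    dir = direction(x, y, nx, ny)
    grid[y][x] |= dir
    grid[ny][nx] |= OPPOSITE[dir]

    mark(x, y, grid, frontier)
  end

  grid
end
```

Each frontier cell is pulled exactly once and carves exactly one passage, so the result is always a spanning tree.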

And you’re done! For those of you not using IE (which will make a total mess of this), here are two demos you can play with to see Prim’s in action:

Mazes generated by Prim’s algorithm share many of the characteristics of those created via Kruskal’s algorithm, such as having an abundance of short cul-de-sacs. Such an aesthetic appeals to some, and not to others, but it definitely has this to say for it: for large mazes, the short cul-de-sacs make the maze harder to puzzle out at a glance!

Whether you enjoy the look of these mazes or not, I encourage you to try your hand at Prim’s algorithm. It’s a fun algorithm to code, not least of all because it comes together so easily. Personally, I enjoy watching the animation: it puts me in mind of a flame consuming a sheet of paper.


For the third article in my series on maze algorithms, I’m going to take a look at Kruskal’s algorithm. (I’ve previously covered recursive backtracking and Eller’s algorithm.)

Kruskal’s algorithm is a method for producing a minimal spanning tree from a weighted graph. The algorithm I’ll cover here is actually a randomized version of Kruskal’s; the original works something like this:

Throw all of the edges in the graph into a big burlap sack. (Or, you know, a set or something.)

Pull out the edge with the lowest weight. If the edge connects two disjoint trees, join the trees. Otherwise, throw that edge away.

Repeat until there are no more edges left.

The randomized algorithm just changes the second step, so that instead of pulling out the edge with the lowest weight, you remove an edge from the bag at random. Making that change, the algorithm now produces a fairly convincing maze.

Let’s walk through an example manually, to see how the process works in practice.

An example

For this example, I’ll use a simple 3×3 grid. I’ve assigned each cell a letter, indicating which set it belongs to.

A  B  C
D  E  F
G  H  I

The algorithm is straightforward: simply select an edge at random, and join the cells it connects if they are not already connected by a path. We can know if the cells are already connected if they are in the same set. So, let’s choose the edge between (2,2) and (2,3). The cells are in different sets, so we join the two into a single set and connect the cells:

A  B  C
D  E  F
G  E  I

Let’s do a few more passes of the algorithm, to get to the interesting part:

A  A  C
D  E  F
G  E  I

A  A  C
D  E  C
G  E  I

A  A  C
D  E  C
G  E  E

A  A  C
D  E  C
E  E  E

Here’s where things start to move fast. Note what happens when the edge between (2,1) and (2,2) is pulled from the bag:

A  A  C
D  A  C
A  A  A

The two trees, A and E, were joined into one set, A, implying that any cell in A is reachable from any other cell in A. Let’s try joining (1,2) and (1,3) now:

A  A  C
A  A  C
A  A  A

Now, consider the edges (1,1)–(1,2) and (1,2)–(2,2). Neither of these has been drawn from the bag yet. What would happen if one of them were? Well, in both cases, the cells on either side of the edge belong to the same set. Connecting the cells in either case would result in a cycle, so we discard the edge and try again.

After one more pass, we’ll have:

A  A  A
A  A  A
A  A  A

The algorithm finishes when there are no more edges to consider (which, in this case, is when there is only a single set left). And the result is a perfect maze!

Implementation

Implementing Kruskal’s algorithm is straightforward, but for best results you need to find a very efficient way to join sets. If you do it like I illustrated above, assigning a set identifier to each cell, you’ll need to iterate on every merge, which will be expensive. Using trees to represent the sets is much faster, allowing you to merge sets efficiently simply by adding one tree as a subtree of the other. Testing whether two cells share a set is done by comparing the roots of their corresponding trees.
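A minimal sketch of such a tree-backed set might look like this (my own assumed implementation, built from nothing beyond the operations just described):

```ruby
class Tree
  attr_accessor :parent

  # walk up to the root of this tree
  def root
    parent ? parent.root : self
  end

  # two cells share a set if their trees share a root
  def connected?(tree)
    root == tree.root
  end

  # merge the sets: hang the other tree's root beneath this tree
  def connect(tree)
    tree.root.parent = self
  end
end
```

A production-grade union-find would add path compression and union by rank, but this is plenty for a maze-sized grid.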

Once you have the tree data structure, the algorithm is extremely straightforward. Begin by initializing the grid (which will represent the maze itself), and the sets (one per cell):

Note that it would probably be more efficient to join the two representations, but I’ve split them apart for clarity.

Then, build the list of edges. Here I’m representing each edge as one of its end-points, and a direction:

edges = []
height.times do |y|
  width.times do |x|
    edges << [x, y, N] if y > 0
    edges << [x, y, W] if x > 0
  end
end

Once you have the list of edges, just sort them randomly:

edges = edges.sort_by{rand}

The algorithm itself, then, is simply a process of looping until the set of edges is empty:

until edges.empty?
  # ...
end

Within the loop, we remove the next edge from the list, compute the other end point, and test their two sets:

x, y, direction = edges.pop
nx, ny = x + DX[direction], y + DY[direction]

set1, set2 = sets[y][x], sets[ny][nx]
unless set1.connected?(set2)
  # join the sets and connect the cells
end

The joining and connecting bit is pretty straightforward:
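Something like this, shown here as part of a self-contained sketch of the whole algorithm (it bundles its own compact Tree class so it runs standalone; the constants and names are my assumptions):

```ruby
# Hypothetical constants, assumed to match the rest of the series.
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { E => 0, W => 0, N => -1, S => 1 }
OPPOSITE = { E => W, W => E, N => S, S => N }

# compact tree-backed disjoint set
class Tree
  attr_accessor :parent
  def root; parent ? parent.root : self; end
  def connected?(tree); root == tree.root; end
  def connect(tree); tree.root.parent = self; end
end

def kruskals(width, height)
  grid = Array.new(height) { Array.new(width, 0) }
  sets = Array.new(height) { Array.new(width) { Tree.new } }

  # build and shuffle the edge list
  edges = []
  height.times do |y|
    width.times do |x|
      edges << [x, y, N] if y > 0
      edges << [x, y, W] if x > 0
    end
  end
  edges = edges.sort_by { rand }

  until edges.empty?
    x, y, direction = edges.pop
    nx, ny = x + DX[direction], y + DY[direction]
    set1, set2 = sets[y][x], sets[ny][nx]

    unless set1.connected?(set2)
      # join the sets and connect the cells
      set1.connect(set2)
      grid[y][x] |= direction
      grid[ny][nx] |= OPPOSITE[direction]
    end
  end

  grid
end
```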

And you’re done! For those of you not using IE (which will make a total mess of this), here are two demos you can play with to see the algorithm in action:

kruskals.rb at gist.github.com

Conclusion

Kruskal’s is a fun algorithm to implement and watch, but I’m not partial to the style of mazes it generates. It tends to create a lot of short dead-ends, which is (admittedly in my own opinion) not necessarily very esthetically attractive.

One area where Kruskal’s works better than an algorithm like the recursive backtracker, is when you’re dealing with a maze with two or more disjoint areas, like if you were doing a maze that was constrained to the shape of two or more letters. Essentially, this is the same as multiple different mazes, but with Kruskal’s you could do them all at once, since you’re only dealing with edges and not with direct connectivity.

Please give Kruskal’s a try and share your implementation! Look for ways to tweak the algorithm to produce different results. Have fun!


Last time I talked about the recursive backtracker algorithm for maze generation. That’s probably always going to be my favorite algorithm for generating mazes, for a variety of reasons, but that’s not going to stop me from looking at others.

For one thing, there are some pretty crazy algorithms out there for generating mazes.

Eller’s algorithm is one of the craziest. It’s also one of the fastest. And it’s the only one I know that lets you generate mazes of an infinite size. In linear time.

Yeah, it’s that crazy.

It does this by building the maze one row at a time, using sets to keep track of which columns are ultimately connected. But it never needs to look at more than a single row, and when it finishes, it always produces a perfect maze.

Like I did for the recursive backtracking algorithm, here’s the “mile-high” overview of Eller’s algorithm:

Initialize the cells of the first row to each exist in their own set.

Now, randomly join adjacent cells, but only if they are not in the same set. When joining adjacent cells, merge the cells of both sets into a single set, indicating that all cells in both sets are now connected (there is a path that connects any two cells in the set).

For each set, randomly create vertical connections downward to the next row. Each remaining set must have at least one vertical connection. The cells in the next row thus connected must share the set of the cell above them.

Flesh out the next row by putting any remaining cells into their own sets.

Repeat until the last row is reached.

For the last row, join all adjacent cells that do not share a set, and omit the vertical connections, and you’re done!

If you’re at all like me, your head is probably spinning at this point. Let’s back up and work through an example manually, to help you see how the algorithm works in practice. Let’s begin with a simple 5-column row.

An example

First, we initialize each cell in that row to be in its own set. I’ll just assign each cell a number, indicating the set it belongs to:

Now we randomly determine the vertical connections, at least one per set. The cells in the next row that we connected to must be assigned to the set of the cell above them:

At this point, we can actually discard the first row, because the algorithm is done with it. We’ll keep it around for now, though, for the sake of illustration. I’ll just put a little space between the previous rows, and the current row:

Let’s analyze that a bit. It seemed to come together pretty magically, considering we weren’t looking at anything but the current row (and the next row, briefly). The key to it all are the sets.

The set that a cell belongs to tells the algorithm who its siblings were, are, and will be. It’s the crystal ball that lets the algorithm gaze into the future (and the past!) and avoid adding cycles and isolates to the maze.

Cells that share a set, also share a path between them. (If you don’t believe me, look at the example I just gave, above. Every cell that shares a set identifier is connected; cells in different sets are not connected.)

If the algorithm allowed us to create a passage between two cells that shared a set, it would be introducing a second path between those two cells. That’s essentially the definition of a loop or cycle in the graph, and since we don’t want cycles in our maze, we disallow that.

Conversely, cells that do not share a set, are not connected (they are disjoint). By the time we reach the end of the maze, every cell must be connected to every other cell, and the only way we can do that is if every set is eventually merged into a single set.

We can’t do that if a set does not propagate itself to the next row. This is why the algorithm requires that at least one vertical passage be created for each set in the row. Otherwise, any set that didn’t create a vertical passage would become extinct after the current row. The result would be an isolate, an orphaned collection of cells that could never be reached from outside that set.

Then, at the end, the algorithm joins all disjoint sets, allowing every cell in the entire maze to be connected by a single, unique path to any other cell in the maze. And we’re done!

Implementation

How would you implement this? The key, for me, turned out to be implementing the sets. You need to be able to quickly determine the set of any given cell in a row, as well as determine the list of cells in any given set. I did this by maintaining a hash of arrays that mapped sets to cells, and another hash that mapped cells to sets. As I did in the example above, I simply used a unique integer to identify each set.

There’s plenty of room for optimization there, though.

Lastly, assigning set ids is done via a #populate method:

def populate
  width.times do |cell|
    unless @cells[cell]
      set = @next_set += 1
      @sets[set] << cell
      @cells[cell] = set
    end
  end

  self
end

Once I had these routines (encapsulated into a State class), the algorithm itself came together fairly neatly. It works in two steps, plus a third to convert the representation into something easier to display.

The first step looks over the row and randomly connects adjacent cells (if they exist in disjoint sets):

connected_sets = []
connected_set = [0]

(state.width-1).times do |c|
  if state.same?(c, c+1) || (!finish && rand(2) > 0)
    # cells are not joined by a passage, so we start a new connected set
    connected_sets << connected_set
    connected_set = [c+1]
  else
    state.merge(c, c+1)
    connected_set << c+1
  end
end

connected_sets << connected_set

As you can see, the process simply looks at each cell and its neighbor, comparing their states and then either adding the cells to a “connected set” (a series of adjacent cells that are all horizontally connected) and merging the sets together, or creating a new connected set when the two cells should not be merged.

The finish variable is used to change the behavior for the final row; it is false for the rest of the rows.

The second step looks at the available sets and randomly adds vertical connections:
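In sketch form, with a pared-down State of my own devising (the real class surely differs), the step looks something like this:

```ruby
# A minimal stand-in State, just enough to demonstrate the vertical step.
class State
  attr_reader :width, :sets, :cells

  def initialize(width, next_set = 0)
    @width, @next_set = width, next_set
    @sets = Hash.new { |h, k| h[k] = [] }  # set id -> list of cells
    @cells = {}                            # cell -> set id
  end

  def next
    State.new(width, @next_set)
  end

  def add(cell, set)
    @sets[set] << cell
    @cells[cell] = set
  end

  def populate
    width.times do |cell|
      add(cell, @next_set += 1) unless @cells[cell]
    end
    self
  end
end

state = State.new(5).populate

# the vertical step: every set must send at least one connection downward
verticals = []
next_state = state.next
state.sets.each do |set, cells|
  cells.shuffle.each_with_index do |cell, i|
    if i == 0 || rand(2) == 0   # always take the first, coin-flip the rest
      verticals << cell
      next_state.add(cell, set)
    end
  end
end
```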

State#next just returns a new State object (that we’re using for the next row). Then, for each set of cells, we randomly pick some number of them and add them to the list of verticals we’re going to create. (The verticals are also added to the next row, in the same set.)

The algorithm itself then loops over these steps repeatedly, setting state to next_state at the end of each pass, until it is done. (In my case, I trapped the INT signal, so ctrl-C can be used to terminate the algorithm and gracefully finish the maze.)

For those of you not using IE (which will make a total mess of this), here are two demos you can play with to see the algorithm in action:

I think Eller’s algorithm is harder to customize than the recursive backtracking algorithm, but it can be done. Consider it an exercise, if you want: how would you introduce horizontal or vertical bias into the maze? (I.e., how would you make the maze prefer longer corridors, either horizontally or vertically?) How would you implement weave mazes, where the passages move over or under other passages? Especially tricky: how would you introduce symmetry into the output, given that the algorithm itself doesn’t look at anything more than the single row?

If nothing else, though, please give Eller’s algorithm a shot. Please share your implementations (in any language!) in the comments (links to gist.github.com are preferred).


I’ve said before that generating mazes is a great default project when experimenting with a new programming language. I’ve yet to find a better one (but I’d love to hear recommendations). However, before you can dive into generating a maze (especially in a syntax you are unfamiliar with), you had better have a solid grasp of how the process works.

With mazes, you can take your pick of a solid double-handful of algorithms: recursive backtracking, Prim’s, Kruskal’s, Eller’s, Aldous-Broder or Wilson’s algorithms, recursive division, hunt-and-kill, and more.

My favorite, and the one I implement by default, is recursive backtracking. It is fast, easy to understand, and straightforward to implement. You’ll need sufficient memory to store the entire maze in memory, though, and it requires stack space again proportional to the size of the maze, so for exceptionally large mazes it can be fairly inefficient. But for most mazes, it works a charm.

Here’s the mile-high view of recursive backtracking:

Choose a starting point in the field.

Randomly choose a wall at that point and carve a passage through to the adjacent cell, but only if the adjacent cell has not been visited yet. This becomes the new current cell.

If all adjacent cells have been visited, back up to the last cell that has uncarved walls and repeat.

The algorithm ends when the process has backed all the way up to the starting point.

I generally implement the field as a grid of bitfields (where the bits in each cell describe which direction passages have been carved). That’s probably just the C programmer in me asserting dominance, though; feel free to experiment with other representations.

In Ruby, I usually initialize the grid like so:

grid = Array.new(height) { Array.new(width, 0) }

Next, the algorithm itself is kicked off by calling the worker function:

This begins carving passages in the grid, starting at the upper-left corner, (0,0). And as you might have guessed from the algorithm’s name, this works recursively, as we’ll see next.

The carve_passages_from method first creates a list of directions that ought to be tried from the current cell:

directions = [N, S, E, W].sort_by{rand}

We sort the list in random order, so that the path will meander, rather than having a bias in any particular direction.

Then, the function iterates over each of those directions, determining the coordinates of the cell in that direction and deciding if the cell is valid or not. Note that a cell is valid only if it lies within the bounds of the maze, AND it has not previously been visited: we only want to carve passages into untouched cells, to avoid creating circular loops in the maze.

directions.each do |direction|
  nx, ny = cx + DX[direction], cy + DY[direction]

  if ny.between?(0, grid.length-1) && nx.between?(0, grid[ny].length-1) && grid[ny][nx] == 0
    # cell is valid!
    # ...
  end
end

Finally, if the cell is valid, we carve a passage out of the current cell and into the next cell. Then, we recursively call carve_passages_from on the new cell:
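Pulled together with the fragments above, the whole thing fits in a few dozen lines. The carve-and-recurse lines here are my reconstruction (the usual bitmask dance), and the constants are assumed:

```ruby
# Hypothetical constants, assumed to match the rest of the series.
N, S, E, W = 1, 2, 4, 8
DX = { E => 1, W => -1, N => 0, S => 0 }
DY = { E => 0, W => 0, N => -1, S => 1 }
OPPOSITE = { E => W, W => E, N => S, S => N }

def carve_passages_from(cx, cy, grid)
  directions = [N, S, E, W].sort_by { rand }

  directions.each do |direction|
    nx, ny = cx + DX[direction], cy + DY[direction]

    if ny.between?(0, grid.length - 1) && nx.between?(0, grid[ny].length - 1) && grid[ny][nx] == 0
      # carve out of the current cell, into the neighbor, and recurse
      grid[cy][cx] |= direction
      grid[ny][nx] |= OPPOSITE[direction]
      carve_passages_from(nx, ny, grid)
    end
  end
end

width, height = 8, 8
grid = Array.new(height) { Array.new(width, 0) }
carve_passages_from(0, 0, grid)
```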

For all of you not using IE (and I honestly hope no reader of my blog uses IE), here’s a Javascript demo you can play with to see the algorithm in action:

Start by writing your own implementation in a language you’re comfortable in, just to wrap your mind around the algorithm. Try replacing the recursion with iteration (always a fun exercise). Consider extending it to include weave mazes (where passages move over or under other passages), or braided mazes (mazes where dead-ends are removed to create loops in the maze), or symmetrical mazes, or wrapped mazes. Or even different cell tessellations. If you’re at all like me, you may find that your “toy” project has taken on a life of its own, and you’re suddenly researching new and exciting ways to build mazes!

Once you understand the algorithm, though, the real fun is trying it in a language you’re unfamiliar with. It’ll show you conditionals, bit manipulation (if you use the bitfield storage like I showed above), iteration, recursion, and console output (if you decide to render your maze, too). When you’ve finished, you’ll have a good idea of how the language looks and works in practice, and not just for trivial “hello world” apps.

Give it a try! Please share your implementations in the comments (links to gist.github.com are preferred).