Planet Haskell

Updated Monday, 05 January 2015 07:00
FP Complete News ( Feed )
Sunday, 04 January 2015
Announcing LTS Haskell 1.0

The Stackage team is happy to announce the first official LTS Haskell release, LTS Haskell 1.0. The LTS Haskell Github repository has a good overview of the project, and our initial blog post provides quite a bit more detail. To quote:

LTS Haskell: Version your Ecosystem

LTS Haskell is a curated set of packages which includes non-breaking point releases. It is a companion to Stackage Nightly: whereas Stackage Nightly releases include potentially breaking changes with each new release, LTS Haskell maintains major version stability for a longer period of time.

The plan for LTS 1 is:

  • Run weekly point releases. The first point release, LTS Haskell 1.1, will be run on Sunday, January 11.
  • Continue running these releases until LTS 2 is released, scheduled for April 1.
  • Like all Stackage snapshots, each release will be available for use in perpetuity. The only timeline is how far into the future we will continue generating patch releases.

As discussed previously, we are aware that three months is not really "long term support." The reason for the short terms here is so that we can shake out the process well for this first release. If things go exceptionally well and there is community desire to change the planned dates, we will certainly discuss it.

If you want the incredibly short guide to using this release, just run the following in your project:

wget http://www.stackage.org/lts/1.0/cabal.config

In the future, when you want to upgrade to a new point release, just delete your old cabal.config and download the latest point release, e.g.:

wget http://www.stackage.org/lts/1/cabal.config

Your code should continue compiling against the new snapshot without any code changes, barring typical caveats of the PVP (unqualified import lists, mistakes by authors, etc).

Together with this release, we're also announcing some improvements to Stackage Server around hosted documentation to make LTS Haskell more useful.

Full module listing

The main page for LTS Haskell 1.0 contains a listing of all packages, their version numbers, and, where available, a link to documentation. (Documentation may not be available if the package only contains executables, or if there is a bug in the Haddock markup in the package.)

We've now added one more feature: a full module listing. This allows you to view all available modules and easily determine which package provides a module.

If you are using LTS Haskell for your projects, I strongly encourage you to use the stackage.org hosted documentation when looking things up, for one simple reason: you're guaranteed that links to other packages will include versions that are also part of the LTS Haskell release set. (The same logic applies to Stackage Nightly releases.)

All of this adds up to a hopefully more pleasant way of thinking about the Haskell library ecosystem: instead of a huge number of individual packages whose individual modules and changesets you need to keep in your head, relying on an LTS Haskell release reduces it all to a single version number for "my Haskell ecosystem," with a massive number of modules at your disposal to write your applications.

Experimental Hoogle support

FP Complete has hosted a Hoogle service for a few years now. This Hoogle provides access to the full FP Haskell Center library set. I personally use this service on an almost daily basis, as I know many others do as well. However, the FP Haskell Center package sets are not completely in sync (yet) with LTS Haskell releases, and therefore the documentation can be slightly out of date.

So to address that, we've added Hoogle search support to stackage.org as well. You can go to any snapshot page, including LTS Haskell 1.0, and perform a Hoogle search. This search will encompass the entirety of the package set. As usual, a big thank you goes to Neil for providing such an awesome tool to the community.

And since I brought it up: FP Complete does intend to synchronize the FP Haskell Center library set with LTS Haskell releases. That will take a little bit longer, but we'll make an announcement when it's ready.

Dive in!

LTS Haskell is out. I'm already using it, and given the minor delta from how Stackage Nightlies work, I believe it should present no hurdles for greater adoption. The community process of dealing with patch releases will be an interesting adventure that I'm looking forward to. Please give this a spin, provide feedback, and let's make a solid, stable basis for doing Haskell development!

Well-Typed (Haskell Consultants) ( Feed )
Wednesday, 31 December 2014
Simple SMT solver for use in an optimizing compiler
This is a second blog post in a series about engineering optimizing compilers; the previous was Quasi-quoting DSLs for free. In this blog we will see how we might write a very simple SMT solver that will allow us to write an optimization pass that can turn something like

if a == 0 then
  if !(a == 0) && b == 1 then
    write 1
  else
    write 2
else
  write 3

into

if a == 0 then
  write 2
else
  write 3

without much effort at all.

Preliminaries

For the sake of this blog post we will consider a very simple imperative object language, defined as

data Expr =
    -- Arithmetic expressions
    ExprInt Integer           -- ^ Integer constants
  | ExprAdd Expr Expr         -- ^ Addition
    -- Boolean expressions
  | ExprBool Bool             -- ^ Boolean constants
  | ExprEq Expr Expr          -- ^ Equality
  | ExprNot Expr              -- ^ Negation
  | ExprAnd Expr Expr         -- ^ Logical conjunction
  | ExprOr Expr Expr          -- ^ Logical disjunction
    -- Variables
  | ExprVar VarName           -- ^ Read from a variable
  | ExprAssign VarName Expr   -- ^ Write to a variable
    -- Control flow
  | ExprSeq Expr Expr         -- ^ Sequential composition
  | ExprIf Expr Expr Expr     -- ^ Conditional
    -- Input/output
  | ExprRead                  -- ^ Read an integer from the console
  | ExprWrite Expr            -- ^ Write an integer to the console

We will assume the existence of a quasi-quoter for this language so that we can write Haskell fragments such as

[expr| if a == 0 then read else write b |]

instead of

ExprIf (ExprEq (ExprVar "a") (ExprInt 0))
       ExprRead
       (ExprWrite (ExprVar "b"))

How you can write such a quasi-quoter was the topic of the previous blog post. You should however be able to read this blog post without having read the previous post; hopefully the mapping from the concrete syntax to the constructors of Expr is pretty obvious.

Simplifying assumption

Our goal will be to write a function

provable :: Expr -> Bool

Let’s consider some examples:

  • The expression a || True should be provable: no matter what value variable a has, a || True is always True.

  • Likewise, the expression a || !a should be provable, for the same reason.

  • However, expression a should not be provable (a might be False)

  • Likewise, expression !a should also not be provable (a might be True)

Note that this means that it’s perfectly possible for both an expression and the negation of that expression to be unprovable.

What about an expression !(n == 0 && n == 1)? Morally speaking, this expression should be provable. However, we will be making the following very important simplifying assumption:

provable is allowed to give up: when provable returns False, this should be interpreted as “failed to prove”, rather than “there exists a counterexample”.

From a compiler perspective, if something is not statically provable, that simply means that an optimization may not be applied even though it could: that is, we miss an opportunity to make the program go faster, but we will not break the code.

An evaluator

We don’t want (or need) to embed a full blown theorem prover into our compiler: ideally we write something simple that will still catch a lot of the common cases. Moreover, we would prefer to reuse as much of our existing compiler infrastructure as we can. Our optimizer is likely to contain an interpreter for the object language, so that we can attempt to statically evaluate expressions. We are going to adapt this interpreter so that we can also use it in our solver. In fact, as we shall see, the solver will be a one-liner.

But we’re getting ahead of ourselves. Let’s consider how to write the interpreter first. The interpreter will be running in an Eval monad; for our first attempt, let’s define this monad as a simple wrapper around the list monad:

newtype Eval a = Eval { unEval :: [a] }
  deriving ( Functor
           , Applicative
           , Alternative
           , Monad
           , MonadPlus
           )

runEval :: Eval a -> [a]
runEval act = unEval act
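
(Deriving all of these instances through the newtype like this relies on GHC's GeneralizedNewtypeDeriving extension, which lifts the underlying list monad's instances to Eval.)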

We will use the monad for failure:

throwError :: Eval a
throwError = Eval []

We can provide error recovery through

onError :: Eval a -> Eval a -> Eval a
onError (Eval act) (Eval handler) = Eval $
    case act of
      [] -> handler
      rs -> rs
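
As a quick sanity check of these semantics, the expected results are shown as comments:

recoverExample, keepExample :: [Int]
recoverExample = runEval (throwError `onError` return 1)  -- [1]: failure runs the handler
keepExample    = runEval (return 0   `onError` return 1)  -- [0]: success ignores the handler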

We will see why we need the list monad when we discuss the evaluator for boolean expressions; but let’s consider the evaluator for integer expressions first:

evalInt :: Expr -> Eval Integer
evalInt = go
  where
    go (ExprInt i)    = return i
    go (ExprAdd a b)  = (+) <$> evalInt a <*> evalInt b
    go (ExprIf c a b) = do cond <- evalBool c
                           evalInt (if cond then a else b)
    go _              = throwError 

Hopefully this should be pretty self explanatory; our toy language only has a few integer-valued expressions, so there isn’t much to do. The interpreter for boolean expressions is more interesting:

evalBool :: Expr -> Eval Bool
evalBool = \e -> go e `onError` guess e
  where
    go (ExprBool a)   = return a
    go (ExprEq   a b) = (==) <$> evalInt a <*> evalInt b
    go (ExprNot  a)   = not  <$> evalBool a
    go (ExprAnd  a b) = (&&) <$> evalBool a <*> evalBool b
    go (ExprOr   a b) = (||) <$> evalBool a <*> evalBool b
    go (ExprIf c a b) = do cond <- evalBool c
                           evalBool (if cond then a else b)
    go _              = throwError 

    guess _e = return True <|> return False

The definition of go contains no surprises, and follows the definition of go in evalInt very closely. However, the top-level definition

evalBool = \e -> go e `onError` guess e

is more interesting. If for some reason we fail to evaluate a boolean expression (for example, because it contains a variable) then guess returns both True and False. Let’s consider some examples:

runEval $ evalBool [expr| True |]

evaluates to [True];

runEval $ evalBool [expr| a |]

evaluates to [True, False] because we don’t know what value a has, but

runEval $ evalBool [expr| a || True |]

evaluates to [True, True]: we still have two guesses for a, but no matter what we guess a || True always evaluates to True.
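
If you prefer to avoid the quasi-quoter, the same check can be written directly against the constructors (assuming VarName is a synonym for String):

exampleOr :: [Bool]
exampleOr = runEval (evalBool (ExprOr (ExprVar "a") (ExprBool True)))
-- exampleOr == [True, True]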

Satisfiability

We can now write our SMT solver; as promised, it will be a single line:

satisfiable :: Expr -> Bool
satisfiable = or . runEval . evalBool

Function satisfiable (the “S” in SMT) checks if the expression is true for some values of the variables in the expression. How do we check this? Well, we just run our evaluator which, when it encounters a variable, will return both values for the variable. Hence, if any of the values returned by the evaluator is True, then the expression is true at least for one value for each variable.

Once we have an implementation of satisfiability, we can implement provable very easily: an expression is provable if its negation is not satisfiable:

provable :: Expr -> Bool
provable = not . satisfiable . ExprNot

If we consider the three examples from the previous section, we will find that both True and a || True are provable, but a by itself is not, as expected.
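
Spelled out with raw constructors (again assuming VarName is String):

provable (ExprBool True)                         -- True
provable (ExprOr (ExprVar "a") (ExprBool True))  -- True
provable (ExprVar "a")                           -- False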

Inconsistent guesses

The careful reader might at this point find his brow furrowed, because something is amiss:

runEval $ evalBool [expr| a || !a |]

evaluates to

[True, True, False, True]

This happens because the evaluator will make two separate independent guesses about the value of a. As a consequence, a || !a will be considered not provable.

Is this a bug? No, it’s not. Recall that we made the simplifying assumption that provable is allowed to give up: it’s allowed to say that an expression is not provable even when it morally is. The corresponding (dual) simplifying assumption for satisfiability, and hence the interpreter, is:

The interpreter, and hence satisfiability, is allowed to make inconsistent guesses.

Making an inconsistent guess is equivalent to assuming False: anything follows from False and hence any expression will be considered satisfiable once we make an inconsistent guess. As a consequence, this means that once we make inconsistent guesses, we will consider the expression as not provable.

More precision

We can however do better: whenever we make a guess that a particular expression evaluates to True or False, then if we see that same expression again we can safely make the same guess, rather than making an independent guess. To implement this, we extend our Eval monad with some state to remember the guesses we made so far:

newtype Eval a = Eval { unEval :: StateT [(Expr, Bool)] [] a }
  deriving ( Functor
           , Applicative
           , Alternative
           , Monad
           , MonadPlus
           , MonadState [(Expr, Bool)]
           )
           
runEval :: Eval a -> [a]
runEval act = evalStateT (unEval act) []

throwError :: Eval a
throwError = Eval $ StateT $ \_st -> []

onError :: Eval a -> Eval a -> Eval a
onError (Eval act) (Eval handler) = Eval $ StateT $ \st ->
    case runStateT act st of
      [] -> runStateT handler st
      rs -> rs

The implementation of evalInt does not change at all. The implementation of evalBool also stays almost the same; the only change is the definition of guess:

guess e = do
  st <- get
  case lookup e st of
    Just b  -> return b
    Nothing -> (do put ((e, True)  : st) ; return True)
           <|> (do put ((e, False) : st) ; return False)

This implements the logic we just described: when we guess the value for an expression e, we first look up if we already made a guess for this expression. If so, we return the previous guess; otherwise, we make a guess and record that guess.

Once we make this change

runEval $ evalBool [expr| a || !a |]

will evaluate to [True,True] and consequently a || !a will be considered provable.

Example: folding nested conditionals

As an example of how one might use this infrastructure, we will consider a simple pass that simplifies nested conditionals (if-statements). We can use provable to check if one expression implies the other:

(~>) :: Expr -> Expr -> Bool
(~>) a b = provable [expr| ! $a || $b |]

The simplifier is now very easy to write:

simplifyNestedIf :: Expr -> Expr
simplifyNestedIf = everywhere (mkT go)
  where
    go [expr| if $c1 then
                 if $c2 then
                   $e1
                 else
                   $e2
               else
                 $e3 |]
      | c1 ~> c2              = [expr| if $c1 then $e1 else $e3 |]
      | c1 ~> [expr| ! $c2 |] = [expr| if $c1 then $e2 else $e3 |]
    go e = e

The auxiliary function go pattern matches against any if-statement that has another if-statement as its “then” branch. If we can prove that the condition of the outer if-statement implies the condition of the inner if-statement, we can collapse the inner if-statement to just its “then” branch. Similarly, if we can prove that the condition of the outer if-statement implies the negation of the condition of the inner if-statement, we can collapse the inner if-statement to just its “else” branch. In all other cases, we return the expression unchanged. Finally, we use the Scrap Your Boilerplate operators everywhere and mkT to apply this transformation everywhere in an expression (rather than just applying it top-level). For example,

simplifyNestedIf [expr| 
    if a == 0 then 
      if !(a == 0) && b == 1 then 
        write 1 
      else 
        write 2 
    else 
      write 3 
  |]

evaluates to

if a == 0 then
  write 2
else
  write 3

as promised. Incidentally, perhaps you are wondering why such an optimization would be useful; after all, what self-respecting programmer writes such code? However, code like the above may result from other compiler optimizations such as inlining: imagine that the inner if-statement came from an inlined function. A lot of compiler optimizations are designed to clean up other compiler optimizations.
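
One practical note: everywhere and mkT come from the Scrap Your Boilerplate library (the syb package), so a setup along these lines is assumed (a sketch; the deriving clause on Expr is the part the code above leaves implicit):

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (everywhere, mkT)

-- data Expr = ... deriving (Show, Data, Typeable)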

Conclusion

We can implement a very simple but useful SMT solver for use in an optimizing compiler by making a small change to the interpreter, which we need anyway.

Note that the increase in precision gained from recording previous guesses is a gradual increase only; satisfiability may still make some inconsistent guesses. For example

runEval $ evalBool [expr| !(n == 0 && n == 1) |]

will evaluate to

[False,True,True,True]

because it is making independent guesses about n == 0 and n == 1; consequently !(n == 0 && n == 1) will not be considered provable. We can increase the precision of our solver by making guess smarter (the “MT” or “modulo theories” part of SMT). For example, we could record some information about the guesses about integer valued variables.

At the extreme end of the scale we would be implementing a full decision procedure for first order arithmetic, which is probably optimization gone too far. However, the approach we took above where we start from the basis that we are allowed to make inconsistent guesses gives us a way to implement a simple solver that addresses the most important cases, even if it misses a few—without requiring a lot of complicated infrastructure in the compiler. And as long as we make sure that our evaluator never returns too few values (even if it may return too many) we don’t have to worry that we might generate invalid code: the worst that can happen is that we miss an optimization.

FP Complete News ( Feed )
Wednesday, 24 December 2014
GHC 7.10RC1 Stackage build results

As many of you likely saw recently, GHC 7.10.1 release candidate 1 was just released. As usually occurs with this process, there is currently lots of breakage in the Haskell library ecosystem. Herbert asked me today if I had plans to throw Stackage at the GHC 7.10 release candidate.

I've used Stackage in the past to be a catalyst to test out packages before release. Until now, I've needed to wait until people relax their upper bounds on Hackage to do a proper test. However, it would be useful to get more information up front. With the newly overhauled Stackage build system, this turned out to be easy: just skip the bounds checks, and pass --allow-newer to cabal configure.
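
Concretely, that amounts to configuring each package roughly like this:

$ cabal configure --allow-newer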

I've already opened an issue about restrictive upper bounds preventing packages from building with GHC 7.10, namely bounds on Cabal, base, bytestring, deepseq, ghc, template-haskell, time and transformers. The rest of this blog post will cover issues I ran into while running the actual build, ignoring version bound constraints.

haddock --hoogle is broken

This is easily reproducible. Unpack a package (BoundedChan in this case) and run:

$ cabal haddock --hoogle
Running Haddock for BoundedChan-1.0.3.0...
Preprocessing library BoundedChan-1.0.3.0...
Haddock coverage:
 100% ( 10 / 10) in 'Control.Concurrent.BoundedChan'
haddock: internal error: expectJust getPackageDetails

After this error message, I disabled --hoogle. I've already opened a Trac ticket for this.

Build and test errors

I turned off reporting of packages that failed due to a build failure in one of their dependencies, since at this point it's simply noise. With those filtered out, the following are the primary build and test failures:

  • MaybeT: BuildFailureException Process exited with ExitFailure 1: cabal build
  • MonadRandom: BuildFailureException Process exited with ExitFailure 1: cabal build
  • Octree: BuildFailureException Process exited with ExitFailure 1: cabal build
  • Yampa: BuildFailureException Process exited with ExitFailure 1: cabal build
  • alex: BuildFailureException Process exited with ExitFailure 1: cabal build
  • arithmoi: BuildFailureException Process exited with ExitFailure 1: cabal build
  • binary-list: BuildFailureException Process exited with ExitFailure 1: cabal build
  • bytestring-trie: BuildFailureException Process exited with ExitFailure 1: cabal build
  • bzlib: BuildFailureException Process exited with ExitFailure 1: cabal build
  • control-monad-free: BuildFailureException Process exited with ExitFailure 1: cabal build
  • crypto-numbers: BuildFailureException Process exited with ExitFailure 1: cabal build
  • djinn-ghc: BuildFailureException Process exited with ExitFailure 1: cabal build
  • doctest: BuildFailureException Process exited with ExitFailure 1: cabal build
  • fclabels: BuildFailureException Process exited with ExitFailure 1: cabal build
  • fgl: BuildFailureException Process exited with ExitFailure 1: cabal build
  • file-location: BuildFailureException Process exited with ExitFailure 1: cabal build
  • filemanip: BuildFailureException Process exited with ExitFailure 1: cabal build
  • fixed-list: BuildFailureException Process exited with ExitFailure 1: cabal build
  • ghc-syb-utils: BuildFailureException Process exited with ExitFailure 1: cabal build
  • haddock-library: BuildFailureException Process exited with ExitFailure 1: cabal build
  • happy: BuildFailureException Process exited with ExitFailure 1: cabal build
  • haskell-src: BuildFailureException Process exited with ExitFailure 1: cabal build
  • hdevtools: BuildFailureException Process exited with ExitFailure 1: cabal build
  • heaps: BuildFailureException Process exited with ExitFailure 1: cabal build
  • hint: BuildFailureException Process exited with ExitFailure 1: cabal build
  • histogram-fill: BuildFailureException Process exited with ExitFailure 1: cabal build
  • hslogger: BuildFailureException Process exited with ExitFailure 1: cabal build
  • hspec-expectations: BuildFailureException Process exited with ExitFailure 1: cabal configure --enable-tests --allow-newer --package-db=clear --package-db=global --package-db=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/pkgdb --libdir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/lib --bindir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/bin --datadir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/share --docdir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/doc --flags=blazehtml05 -bytestring-in-base https network-uri new-base old-locale smallbase splitbase -test-hlint
  • hybrid-vectors: BuildFailureException Process exited with ExitFailure 1: cabal build
  • kure: BuildFailureException Process exited with ExitFailure 1: cabal build
  • lca: BuildFailureException Process exited with ExitFailure 1: cabal build
  • lhs2tex: BuildFailureException Process exited with ExitFailure 1: cabal configure --allow-newer --package-db=clear --package-db=global --package-db=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/pkgdb --libdir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/lib --bindir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/bin --datadir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/share --docdir=/home/ubuntu/haskell/stackage/builds/stackage-nightly-2014-12-24/doc --flags=blazehtml05 -bytestring-in-base https network-uri new-base old-locale smallbase splitbase -test-hlint
  • libgit: BuildFailureException Process exited with ExitFailure 1: cabal build
  • list-t: BuildFailureException Process exited with ExitFailure 1: cabal build
  • logfloat: BuildFailureException Process exited with ExitFailure 1: cabal build
  • matrix: BuildFailureException Process exited with ExitFailure 1: cabal build
  • mtl-prelude: BuildFailureException Process exited with ExitFailure 1: cabal build
  • mtlparse: BuildFailureException Process exited with ExitFailure 1: cabal build
  • mysql: BuildFailureException Process exited with ExitFailure 1: cabal build
  • nanospec: BuildFailureException Process exited with ExitFailure 1: cabal build
  • options: BuildFailureException Process exited with ExitFailure 1: cabal build
  • pqueue: BuildFailureException Process exited with ExitFailure 1: cabal build
  • rank1dynamic: BuildFailureException Process exited with ExitFailure 1: cabal build
  • repa: BuildFailureException Process exited with ExitFailure 1: cabal build
  • speculation: BuildFailureException Process exited with ExitFailure 1: cabal build
  • storable-complex: BuildFailureException Process exited with ExitFailure 1: cabal build
  • stringsearch: BuildFailureException Process exited with ExitFailure 1: cabal build
  • syb: BuildFailureException Process exited with ExitFailure 1: cabal build
  • syb-with-class: BuildFailureException Process exited with ExitFailure 1: cabal build
  • tar: BuildFailureException Process exited with ExitFailure 1: cabal build
  • text: BuildFailureException Process exited with ExitFailure 1: cabal build
  • th-desugar: BuildFailureException Process exited with ExitFailure 1: cabal build
  • th-expand-syns: BuildFailureException Process exited with ExitFailure 1: cabal build
  • transformers-compat: BuildFailureException Process exited with ExitFailure 1: cabal build
  • udbus: BuildFailureException Process exited with ExitFailure 1: cabal build
  • word8: BuildFailureException Process exited with ExitFailure 1: cabal test --log=/home/ubuntu/haskell/stackage/logs/stackage-nightly-2014-12-24/word8-0.1.1/test-run.out

Altogether, 355 packages attempted builds, with 62 failures. Stackage normally builds about 820 packages, so approximately 465 packages depend on a package which failed to build.

I've uploaded a tarball with all of the build and test logs included. If there are any questions about the results, let me know. I've saved all changes for this build system to the ghc7.10 branch of the stackage repo, so I'll be able to easily run this analysis again in the future.

Yesod Web Framework News ( Feed )
Sunday, 21 December 2014
Hackage docs live again, please do the READMEs!

After sufficient complaining ensued on Reddit after my previous blog post, and enough claims of "this is so trivial to implement," I decided to bite the bullet and just implement it. After debugging a lens bash script, I reimplemented it in pure Haskell and added it to my mega repo tool chain to be part of my normal release process.

So for everyone who thinks that Hackage is the right place to have documentation: it's there again. I'm still quite unsatisfied with the whole situation, and have wasted yet another hour of my life on what in my mind is meaningless busywork. But so be it, it's frankly easier to solve a technical problem than put up with the complaints. (I've certainly spent more than an hour in the past four years explaining to people multiple times why docs on Hackage don't exist for my packages.)

Now I'll put out a request: will someone please implement READMEs on Hackage? Yes, I could do this, but I've never touched the code base, and have lots of other things to do. The current situation of having to describe our packages at least three times (synopsis, description, and README) is annoying, and the fact that some forms of documentation cannot be expressed at all in description fields is very problematic (see linked mailing list thread in that issue).

I hope I haven't just set a precedent that complaining at me gets me to do work I don't like...

Yesod Web Framework News ( Feed )
Saturday, 20 December 2014
Use Stackage for docs

I'm writing this blog post to address a personal annoyance of mine as the maintainer of a large number of Haskell packages. Very often, I get bug reports about lack of documentation on Hackage. This has occurred for years. Most people who file these issues are not aware that the missing documentation is more often than not a problem with Hackage itself. Some people are aware of this, and are asking me to start running a separate tool every time I upload a package, to generate the documentation locally.

I have another annoyance with documentation on Hackage: I'm forced to write my package's description in a very strange Haddock-inside-cabal format in the cabal file itself. I need to write a description in any event in a README for users on Github, so this is purely wasted effort.

To address both of these issues at the same time, I've started modifying the description field in my packages to give a link to their Stackage address. I'm doing this out of laziness on my part: I can now feel confident that documentation will be available at the pages people are pointed to, I will hopefully get fewer needless issues opened about "Hackage documentation is broken," and I don't need to keep two meaningful descriptions of each of my packages written (one in the weird cabal/Haddock format, one in much nicer Markdown).

Others are clearly welcome to do this as well, but my main motivation here is explaining my reasoning for these changes, so I don't get a flood of new inquiries as to "why do you have such a strange description field in all your packages?" For those wishing to emulate this, follow these steps:

  1. Make sure your package has a good README or README.md file with a description.
  2. Include the README or README.md file in the extra-source-files field of your cabal file.
  3. Update your description field with text like: "Hackage documentation generation is not reliable. For up to date documentation, please see: http://www.stackage.org/package/*name*."
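
Concretely, the relevant parts of the cabal file might look like this sketch (the package name foo is a placeholder):

name:               foo
synopsis:           Short one-line summary of foo
description:        Hackage documentation generation is not reliable. For up to
                    date documentation, please see:
                    <http://www.stackage.org/package/foo>.
extra-source-files: README.md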

UPDATE See my next blog post for aftermath of this.

Glasgow Haskell Compiler News ( Feed )
Tuesday, 16 December 2014
GHC Weekly News - 2014/12/16

Hi *, time for another piece of the GHC weekly news!

  • Joachim Breitner has gotten the new GHC 7.8.4 package to tentatively build on ARM quite easily for Debian. Austin also took the liberty of merging all the needed patches; they'll be part of the 7.8.4 release.
  • For the past few days, Richard Eisenberg has been hunting a performance regression in the compiler. After profiling, discussion on IRC and elsewhere, Richard has finally made some headway, and discovered one of the 'hot spots' in his patch. Unfortunately the battle isn't quite over just yet, and the hunt for a few more % increase remains. https://www.haskell.org/pipermail/ghc-devs/2014-December/007645.html

Finally, in a slight change, we'll also be covering some notes from this week's meeting between GHC HQ (Austin, Simon PJ, SimonM, Herbert and Mikolaj), including...

  • The 7.10 RC1 looks like it's scheduled to occur this week still; all of our libraries and submodules are up-to-date, and we've taken the time to alert all of our maintainers about this. Thanks to Herbert for taking control of this!
  • We'll soon be implementing a new 'push hook' for the ghc.git repository: no more trailing whitespace. Since we've recently detabbed and de-lhs-ified the tree, a knock-on effect was deleting trailing whitespace. Now that we've done a lot of this, we should take the time to enforce it, so that trailing whitespace can't slip back in.
  • This week, Austin managed to secure two sponsors for GHC/Haskell.org. We've been given a wonderful set of ARM buildbots (running in the cloud!) and a new, incredibly powerful POWER8 machine to use (with over 170 hardware threads available, for scalability testing). Hooray for our friends at Online.net and RunAbove.com for helping us out!

Closed tickets this week include: #9871, #9808, #9870, #9605, #9874, #9872, #9090, #9404, #8240, #9567, #9566, #9583, #9117, #9882, #9884, #9372, #7942, #8951, #9817, #9620, #9336, #9523, #9552, #8988, #9390, #9415, #9371, #7143, #9563, #8778, #4428, #4896, #9393, #9169, #7015, #8943, #8621, #9132, #9857, #8024, #9831, and #9888.

Minimal Yesod multifile site

I was talking with a coworker yesterday who was uncertain of how to start on a Yesod application without using the scaffolding. A single-file application is easy, and is well documented. The question is: how do you make a trivial multi-file site, where the handler functions can live in individual modules and still use type-safe URLs?

The problem in achieving this is separating out the data type creation from the dispatch generation. This is also discussed in the book, but what seems to be lacking is a simple cookbook-style example. So here it is!

This example serves a very minimal site (actually, it even has a JSON service), but can easily be extended to support more complex things. This is a good basis for people looking to implement minimal services without the need for things in the scaffolding like authentication, configurable logging, etc.

Also, if anyone's interested, it would be possible to add this as one of the scaffolding options from yesod init.

FP Complete News ( Feed )
Monday, 15 December 2014
Stackage: Survey results, easier usage, and LTS Haskell 0.X

There was tremendous response to our Stackage survey, so I'd like to say: thank you to everyone who participated, the feedback was invaluable. Additionally, in the past two weeks, I think we've added around 100 new packages to Stackage based on everyone's pull requests, so again, thank you to everyone who got involved. You can view the survey results yourself. Of particular interest to me were the freeform responses, which gave us a lot of valuable information.

Chris Done and I went over the results together, and by far the strongest impression we got was that the Stackage setup process was too onerous. The lack of direct cabal-install support and the need to choose from among six possible snapshots were very problematic. Furthermore, some people found the homepage confusing, and didn't understand from it why they should use Stackage. There was also fear that, by using Stackage, you'd end up missing out on some important packages, either because those packages aren't present, or because it's unclear how to add new packages.

So today, we're announcing a number of changes on stackage.org to address these concerns, as well as to pave the way for the upcoming LTS Haskell release. These changes are still a work in progress, so please give us feedback (or feel free to send a pull request as well).

Simplified choices

In order to use Stackage, you first had to choose GHC 7.8, GHC 7.8 + Haskell Platform, or GHC 7.6. You then had to decide if you wanted exclusive vs inclusive. Once we add LTS Haskell, that's another choice to add to the mix. So we've decided to simplify the options advertised on the homepage to two: LTS Haskell and Stackage Nightly.

Each of these will be compatible with only one version of GHC (7.8.3 for now). Another piece of feedback is that users use inclusive snapshots far more commonly than exclusive ones, so we're going to default to giving inclusive instructions.

One important thing to keep in mind is that this will not affect current users at all. All snapshots currently hosted on stackage.org will remain there in perpetuity. We're talking about discovery for new users here.

Simplified setup

Until now, we've recommended setting up Stackage by changing your remote-repo setting to point to the appropriate stackage.org URL. In October, Greg Weber came up with the idea of generating a cabal.config file to specify constraints instead of using a manual URL. We've decided to make this the preferred Stackage setup method. This provides a number of benefits for you immediately:

  • Works directly with cabal, without needing a bleeding-edge version with remote-repo support
  • It's fully supported by cabal sandboxes
  • It's easy to tweak your version requirements if desired
  • You keep Hackage server as your package source, which may be desired by some

The downsides with this are:

  • There are a few bugs in cabal-install around cabal.config support, see the issue for more information
  • This approach only works for "inclusive"-style snapshots. However, as we're now recommending inclusive as the default mechanism, this makes sense. The cabal.config file contains an optional remote-repo line which you can uncomment to get back exclusive-style snapshots.
  • There are some concerns about Hackage server's reliability. If you'd like to have a more reliable server, FP Complete offers hackage.fpcomplete.com as an alternative remote-repo, hosted by Amazon S3.

As a result of this change, getting set up with Stackage is now as easy as downloading a cabal.config file, placing it in your project directory, and running cabal install. Our homepage has easy to use instructions for this as well.
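
For a sense of what's inside: the downloaded cabal.config is essentially a pin of every package in the snapshot via the constraints field. An abbreviated, illustrative sketch (the package versions here are made up):

-- cabal.config (excerpt; versions illustrative)
constraints: aeson ==0.8.0.2,
             base ==4.7.0.1,
             text ==1.2.0.3
-- ...one entry per package in the snapshot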

More focused homepage

Speaking of the homepage: we've updated it to:

  • Give easy-to-use installation instructions
  • Give a clear, concise explanation of what Stackage is and the problems it solves
  • Provide a link for installation instructions, for people without a Haskell toolchain installed
  • Provide information on how to deal with a package not included in Stackage
  • Provide a link for package authors to get involved with Stackage

Relevant Github issue with more details

More informative snapshot pages

The snapshot pages now list all packages in the snapshot, together with version numbers, synopses, and documentation links (if available). The setup instructions are also much simpler on each snapshot page.

We've also set up nicer URLs for the commonly used snapshots. In particular:

  • /nightly will take you to the latest nightly
  • /nightly/2014-12-15 will take you to the December 15, 2014 nightly
  • /lts will take you to the latest LTS
  • /lts/1 will take you to the latest LTS in the 1 series
  • /lts/1.3 will take you to LTS 1.3

Relevant Github issue with more details

More informative package pages

We've streamlined the package pages to provide the most pertinent information. Have a look for yourself. Of particular interest, we now have inline links for Haddock documentation. You can now very easily start browsing docs from just looking at a package page.

Relevant Github issue with more details

New installation instructions

There's now a dedicated installation instructions page targeted at people without a Haskell installation. This page is controlled by a Markdown file on Github, and pull requests to improve content are very much welcome!

LTS repo has started, updated Stackage codebase

I've created the LTS Haskell repo. I'm populating it with 0.X releases now as pre-release testing. To reiterate: LTS Haskell is not launched yet, and I will be holding off on an official 1.0 until January 1. So if you have packages you want in LTS, you still have two weeks to get them in.

I've also done a major overhaul of the Stackage codebase itself to make for far more reliable builds. There are lots of details involved, but they're probably not too terribly interesting to most readers. The important takeaways are:

  • Each build is now fully represented by a YAML file that contains a lot of useful metadata
  • There are completely automated executables to create new LTS and nightly releases
  • The codebase is well set up to create reusable binary package databases, if anyone's interested in doing that (I know we'll be using it at FP Complete)

Stopping future GHC 7.6/Haskell Platform builds (?)

This decision is still up for discussion, but my plan is to discontinue Stackage daily builds for GHC 7.6 and GHC 7.8 + Haskell Platform. The reasons are:

  • It puts a large burden on package authors to maintain their packages with old dependencies, which is precisely the opposite of what we want to do with LTS Haskell
  • Very few people are using GHC 7.6
  • There are some fundamental problems with the current Haskell Platform package set. I hope these are addressed, hopefully by unifying with LTS Haskell. But currently, the package sets based on the Haskell Platform are inherently buggy, as they use package versions with known deficiencies.

If you're using a Haskell Platform installed toolchain now, I recommend trying out the new installation instructions to get a toolchain that will be fully compatible with both LTS Haskell and Stackage Nightly.

Future: GHC upgrade policy

One future policy decision we'll need to make is when to upgrade to a new version of GHC. My proposed plan is that, once we get a successful nightly build with a new GHC version, we stop generating nightlies for the old version. For LTS Haskell, we'll use whatever version of GHC is used by the nightlies at the time we take a snapshot.

The upshot of this will be that, at any given time, the official Stackage snapshots will support at most two GHC versions: whatever nightly supports, and the version used by LTS, which may be one version behind.

Episode 9 - Conal Elliott on FRP and Denotational Design

Conal Elliott, inventor of Functional Reactive Programming, tells us about the birth of FRP as well as other stories from his 25 years of functional programming experience. He shares what he considers the fundamentals of FRP (behaviors and events) and how they work in a model with continuous time. We speak about FRP practicality and efficiency, including how a continuous time model can help lead to a high performance implementation. Eventually we're led into Denotational Design, which plays a part in the design and refinement of FRP and which Conal considers his simplest and clearest design tool.

Haskell Weekly News ( Feed )
Thursday, 11 December 2014
Haskell Weekly News: Issue 313
Welcome to issue 313 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers November 16 to December 6, 2014.

Quotes of the Week

Dijkstra: How do we convince people that in programming simplicity and clarity (in short: what mathematicians call "elegance") are not a dispensable luxury, but a crucial matter that decides between success and failure?
FP Complete News ( Feed )
Monday, 08 December 2014
Dropping GHC 7.4 support in FP Haskell Center

When we released FP Haskell Center 3.1, we deprecated our support for GHC 7.4. Till now, we've left that support in place to give users a grace window for upgrading to GHC 7.8. I'm announcing our plans to fully remove GHC 7.4 support in a near release, most likely FP Haskell Center 3.3.

One of the main reasons we are removing support is that the library versions compatible with GHC 7.4 are no longer being maintained.

If you are still actively using GHC 7.4 on FP Haskell Center, and would like us to consider extending its lifetime, please let us know (via the feedback link at the top of this page) in the next few weeks.

GHC Weekly News - 2014/12/08

Hi *,

Once more, it's time for some news about GHC! This week's regularly scheduled programming (get it?) has brought you...

  • As of last week, GHC officially has no more .lhs files in its source repository; instead, all files have been converted to .hs and are now much more consistent.

Closed tickets this week include: #9850, #9005, #9828, #9833, #9582, #8935, #9186, #9480, #9497, #7908, #4347, #3977, #3859, #3844, #3814, #3771, #3739, #2182, #9812, #4921, #7947, #9240, #5401, #3625, #3517, #9444, #9142, #3447, #8894, #3065, #3191, #2697, #2836, #5443, #7736, #2489, #2456, #2204, #9777, #9859, #9869, #9808

FP Complete News ( Feed )
Sunday, 07 December 2014
Backporting bug fixes: Towards LTS Haskell

The concept I'll be describing here is strongly related to GPS Haskell, something Mark, Duncan, and I started working on at ICFP. I'll expand on the relation to that project in the questions section below.

There's a very simple, easily understood problem that I'm sure many of us writing software in Haskell have faced: we want a fixed API to write code against, but we also want to get bug fixes for our dependencies. (And dare I say it, possibly even some new features.) Currently, there's no easy way to do that. Curated systems like Stackage provide us with fixed APIs to code against, but to get the benefit of even one tiny new bug fix, we currently need to move over to a brand new snapshot, providing a brand new API which, crucially, may include arbitrary breaking changes compared to the old API. (The same applies to Linux distributions like Debian and Nix, and to the Haskell Platform as well.)

This is a well understood problem in a different context: Linux distributions themselves. What we have today in Stackage is akin to Debian unstable: a rolling release system where you update your entire environment to a new state of being. Each state of being is internally consistent, but may not be compatible with what you've done locally. In the case of Debian, your config files might break, for instance. In the case of Haskell, your program may no longer compile.

By contrast, we have systems like Ubuntu Long Term Support (LTS). In an LTS release, bug fixes are backported to a stable version for an extended period of time. This allows users to have stability without stagnation. Over the next month, a few of us in the community will be working towards the goal of an experimental "LTS Haskell" kind of project, and hope to have it ready to start testing by January. This blog post is intended to describe how this will work, and encourage people to both provide feedback to improve the system, and to get involved in the project.

The process

On January 1, 2015, we're going to take the most recent Stackage unstable snapshot and promote it to be "LTS Haskell 1.0". It will have its own URL on stackage.org, and will be tracked in a Github repo. On a regular basis (tentatively: once a week), we'll take all of the packages in this snapshot and bump them to the newest version from Hackage with the same major version number (see "example bump" below). We'll then run the full Stackage build/test procedure on this new set of package versions. Once this passes, we'll release this as "LTS Haskell 1.1". That's the whole process.

The significance of this is that every release in the 1.X series will have a backwards-compatible API with previous releases, in the same sense that we're used to with minor version bumps. That means that, barring issues of name collisions, your code will continue to compile with new releases. However, you will also get new features rolled out in minor version bumps of your dependencies and, more importantly, bug fixes that have been released.

After a certain period of time (tentatively: three months, see questions below), we'll again take the newest unstable Stackage snapshot and call that LTS Haskell 2.0. There will be an overlap period where both LTS Haskell 1 and 2 are supported (tentatively: one month), and then LTS Haskell 1 will be retired in favor of LTS Haskell 2. This will give users a chance to upgrade to a new supported release. Note that even after being retired, the old snapshots will still be available for use, the only question is whether bugfixes will still be backported.

Example bump

To clarify the bump procedure, consider the following fictitious set of packages in LTS Haskell 1.0:

  • foo-2.4.1
  • bar-3.2.2
  • baz-5.1.9

After 1.0 is released, the following releases are made to Hackage:

  • foo-2.4.1.1 is released with a bug fix
  • bar-3.2.3 is released with a new feature, which doesn't break backwards compatibility
  • baz-5.2.0 is released, which is a breaking change

In our bumping procedure, we would replace foo-2.4.1 with foo-2.4.1.1, since it has the same major version number. Similarly, bar-3.2.2 would be bumped to 3.2.3. However, baz-5.1.9 would not be bumped to baz-5.2.0, since that introduces a breaking API change. (baz's author, however, would be able to make a baz-5.1.9.1 or baz-5.1.10 release, and those would be included in the next bump.)
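
To make the rule concrete, here is an illustrative sketch of the acceptance test (not the actual Stackage tooling), using the PVP convention that the major version is the first two components of the version number:

sameMajor :: [Int] -> [Int] -> Bool
sameMajor old new = take 2 old == take 2 new

-- sameMajor [2,4,1] [2,4,1,1] == True   -- foo bump accepted
-- sameMajor [3,2,2] [3,2,3]   == True   -- bar bump accepted
-- sameMajor [5,1,9] [5,2,0]   == False  -- baz bump rejected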

Design goals

There are two primary design goals underlying this simple process.

  1. We want the smallest change possible for users, and the smallest amount of work to be created for library authors. To use LTS Haskell, you would just modify your remote-repo, like you do today to use Stackage. (And hopefully in the future, even that will be simplified, once changes are adopted by the Haskell Platform and Hackage.) Library authors already release their code to Hackage with bugfixes. Instead of making them go through a process to get their changes adopted, we will automatically include them.

  2. We want to make the process as automatic as possible. The process listed above allows a new LTS Haskell candidate to be produced with zero human intervention (though some massaging may be necessary for funny situations on Hackage, see questions section below). Making the process automatic makes it that much easier to provide regular releases.

Note that these design goals are built around what's made Stackage such a successful project: minimal author dependencies, simple user story, and automation. I believe we can recreate that success, with even greater benefit now.

A request to library authors

There is one change that library authors can make that would improve the experience: support the current LTS Haskell major version of your packages, and provide bug fixes for them. That means that, if you're the maintainer of foo, LTS Haskell has foo-1.2.1, you've released foo-1.3.0, and a bug is discovered in both the 1.2 and 1.3 versions, please fix the bug with both a foo-1.2.1.1 and a foo-1.3.0.1 release. This not only helps LTS Haskell users, but library users in general looking to avoid unnecessary API changes.

Questions

This sounds a lot like GPS Haskell. What's the difference? It should sound very similar; the goal of this project is to be a testing ground for what GPS Haskell will become. GPS Haskell involves multiple moving parts: Stackage, the Haskell Platform, and Hackage. It's difficult to coordinate work among all those parts and keep stability. Stackage is well set up to support experiments like this by having the multiple snapshot concept built in. The goal is to try out this idea, shake out the flaws, and hopefully when we've stabilized it, the Haskell Platform and Hackage will be ready to adopt it.

Ubuntu LTS doesn't allow new features to be added. Why are you allowing new features in addition to bugfixes? I'll admit that I originally argued against adding new features, while Mark was in favor of it. Ultimately, I changed my mind for two reasons: I saw people asking for this exact thing to be present in Stackage, and allowing backporting of new features eases the maintenance burden of library authors considerably, which is an incredible selling point. If there's demand in the future for a bugfix-only version of this, I'd be very much open to exploring the idea. But I think it's pragmatic to start initial investigation with this.

Is LTS Haskell a separate project from Stackage? I'd describe it more as an extension of Stackage, with the goal to ultimately expand to multiple projects: Stackage, the Haskell Platform, and Hackage. Said another way: on a code level, this is clearly an extension of the Stackage tooling. But ideologically, it's trying to adopt the best features of all three of those projects in a way that all of them will be able to take advantage.

Why such short support windows? The strawmen of three months between releases and a one month grace period are ridiculously short support windows. The reason I propose them is that, as I mentioned in the design goals, we want the smallest delta from how people work today. Right now, there is no concept of stable versions, and we're trying to introduce it. Starting off with a more standard support window of something like two years would be a radical shift in how library users and authors operate today. Three months is very short, but it's long enough for us to test the process. As time goes on, we should have serious community discussions on how long a support window we should have. (I, for one, am fully in favor of extending it well beyond three months.)

What kind of funny Hackage situations do you mean above? I mentioned above that manual intervention may sometimes be necessary. Consider the following situation: foo-1.1 depends on bar-1.1, and both are included in LTS Haskell 1.0. bar-1.2 is then released, which by the rules stated above, will not be included in LTS Haskell 1.1. foo-1.1.1 is also released, which should be included. However, suppose that foo-1.1.1 has a lower bound bar >= 1.2. Even though foo itself isn't changing its API, it's demanding an API change for another package. In this case, we'd have to disallow foo-1.1.1 from being included in LTS Haskell 1.1. I'm not sure if we'll be able to automate this kind of detection.
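
That said, a first approximation of such a check is conceivable: accept a candidate bump only if every dependency range it declares is satisfied by the versions already pinned in the snapshot. A hypothetical sketch (all names and types here are made up for illustration):

import Data.Map (Map)
import qualified Data.Map as Map

type PkgName = String
type Version = [Int]

-- Accept a candidate only if each of its dependency ranges is satisfied
-- by the version pinned in the snapshot; dependencies not present in the
-- snapshot are ignored here for simplicity.
acceptableBump :: Map PkgName Version -> [(PkgName, Version -> Bool)] -> Bool
acceptableBump pinned = all ok
  where
    ok (name, inRange) = maybe True inRange (Map.lookup name pinned)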

As a side note, I've long considered this a shortcoming of the Package Versioning Policy's stance on when version bumps are required, and have debated proposing a change. I'm still debating that proposal, but wouldn't object if someone else wants to make that proposal instead.

How does this affect Linux distributions? It doesn't necessarily affect them at all. However, LTS Haskell could be a very interesting set of packages for Linux distros to track, for all the same reasons given above regarding backported bugfixes. In this sense, you can think of LTS Haskell as having multiple delivery mechanisms. We're experimenting with one delivery mechanism via stackage.org now; we can have future delivery mechanisms via Debian, Fedora, Nix, and even with direct support in Hackage/cabal-install.

What can I do to help? The areas that jump to mind are:

  • Discuss on the Stackage mailing list, in Reddit discussions, etc., to flesh out ideas and shake out flaws early
  • Test the snapshots, especially on Mac and Windows
  • This is a project for the community. Once the initial code gets written, improving the code base is as easy as submitting a pull request!
  • Get more packages into Stackage over the next month, so that LTS Haskell 1.0 is as complete as possible.

Do you have any more details? I originally wrote a two-part blog post with much more detail, but got feedback that the content was a bit too dense, so I rewrote the content in the format here. There's still lots of information present in those blog posts that may be of interest to some, so I've posted them as a Github Gist in case they're useful to anyone. (Note: I'm not aware of contradictions between that Gist and this post. If there are contradictions, this post takes precedence.)

What does this blog post have to do with the recent Stackage survey? Nothing directly, yet. The survey is intended to help gather more information about how people are using Stackage, and to help us make more informed decisions with future Stackage and LTS Haskell work. This blog post was written before I posted the survey, and has not incorporated the survey results in any way. Stay tuned for more information about those results in a separate blog post.

A very general API for relational joins

Maps and tuples are useful data types for modeling relational operations. For example, suppose we have the following table, indexed by the Id column:

| Id | First Name | Last Name |
|----|------------|-----------|
| 0 | Gabriel | Gonzalez |
| 1 | Oscar | Boykin |
| 2 | Edgar | Codd |

We can model that as a Map where the key is the Id column and the value is a tuple of FirstName and LastName:

import Data.Map (Map) -- from the `containers` library
import qualified Data.Map as Map

type Id = Int
type FirstName = String
type LastName = String

names :: Map Id (FirstName, LastName)
names = Map.fromList
[ (0, ("Gabriel", "Gonzalez"))
, (1, ("Oscar" , "Boykin" ))
, (2, ("Edgar" , "Codd" ))
]

Now suppose we have another table containing Twitter handles, also indexed by Id:

| Id | Twitter Handle |
|----|----------------|
| 0 | GabrielG439 |
| 1 | posco |
| 3 | avibryant |

We can also encode that as a Map:

type TwitterHandle = String

handles :: Map Id TwitterHandle
handles = Map.fromList
[ (0, "GabrielG439")
, (1, "posco" )
, (3, "avibryant" )
]

One of the nice properties of Maps is that you can join them together:

import Control.Applicative

-- I'm using `join2` to avoid confusion with `Control.Monad.join`
join2 :: Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = ... -- Implementation elided for now
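
As an aside, one plausible implementation for the Map type from containers, behaving like an SQL inner join, would be Map.intersectionWith:

-- A sketch: keep only the keys present in both maps, pairing their values
join2 :: Ord k => Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = Map.intersectionWith (,)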

What if we could generalize join2 to work on types other than Map? Perhaps we could use the Applicative interface to implement join2:

join2 :: Applicative f => f a -> f b -> f (a, b)
join2 = liftA2 (,)
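
This more general join2 already works for other Applicatives out of the box. For example, specialized to Maybe (a quick check; the outputs are what GHCi would print):

>>> join2 (Just 1) (Just 2)
Just (1,2)
>>> join2 (Just 1) Nothing
Nothing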

However, the Map type cannot implement Applicative in its current form. The reason why is that there is no sensible definition for pure:

pure :: v -> Map k v
pure = ???

This would require a Map that was defined for every key, which we cannot encode. Or can we?

Tables

Well, who says we need to use the Map type from containers? What if I were to encode my Map this way:

import Prelude hiding (lookup)

-- | A map encoded as a lookup function
newtype Table k v = Table { lookup :: k -> Maybe v }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table (\k -> Map.lookup k m)
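
A quick check that from preserves lookups, using the names map defined earlier (output as GHCi would print it):

>>> lookup (from names) 2
Just ("Edgar","Codd")
>>> lookup (from names) 3
Nothing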

This new type of Map permits only a single operation, lookup, but because we constrain our API to that single operation we can now implement the full Applicative interface:

instance Functor (Table k) where
    fmap f (Table g) = Table (\k -> fmap f (g k))
    -- Same as: Table (fmap (fmap f) g)

instance Applicative (Table k) where
    pure v = Table (\k -> Just v)
    -- Same as: Table (pure (pure v))

    Table f <*> Table x = Table (\k -> f k <*> x k)
    -- Same as: Table (liftA2 (<*>) f x)

We can promote conventional Maps to this new Table type using the above from function:

names' :: Table Id (FirstName, LastName)
names' = from names

handles' :: Table Id TwitterHandle
handles' = from handles

... and now the more general Applicative join2 will work on these two tables:

>>> let table = join2 names' handles'
>>> :type table
table :: Table Id ((FirstName, LastName), TwitterHandle)
>>> lookup table 0
Just (("Gabriel","Gonzalez"),"GabrielG439")
>>> lookup table 1
Just (("Oscar","Boykin"),"posco")
>>> lookup table 2
Nothing

However, in its present form we can't dump the table's contents because we don't know which keys are present in the table. Let's fix that by adding an additional field to the Table type listing the keys. We will treat tables built directly from functions as being defined for all keys:

import Data.Set (Set)
import qualified Data.Set as Set

data Keys k = All | Some (Set k)

instance Ord k => Num (Keys k) where
    -- `fromInteger` only needs to handle the literals 0 and 1 used below
    fromInteger 0         = Some Set.empty
    fromInteger n | n > 0 = All

    All     + _       = All
    _       + All     = All
    Some s1 + Some s2 = Some (Set.union s1 s2)

    All     * ks      = ks
    ks      * All     = ks
    Some s1 * Some s2 = Some (Set.intersection s1 s2)

-- | A map encoded as a lookup function
data Table k v = Table
    { keys   :: Keys k
    , lookup :: k -> Maybe v
    }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table
    { keys   = Some (Set.fromList (Map.keys m))
    , lookup = \k -> Map.lookup k m
    }
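
The Num instance for Keys may look unusual, but it is a deliberate pun that the instances below exploit: 0 and (+) correspond to the empty key set and union, while 1 and (*) correspond to the set of all keys and intersection. A few consequences of the definitions above, for reference:

-- Numeric literals go through `fromInteger`, giving us the two identities:
-- 0 = Some Set.empty   -- the identity for (+), i.e. union
-- 1 = All              -- the identity for (*), i.e. intersection
--
-- All * Some s = Some s   -- intersecting with every key changes nothing
-- All + Some s = All      -- unioning with every key gives every key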

Even after extending Table with keys we can still implement the Applicative interface:

instance Functor (Table k) where
    fmap f (Table ks g) = Table ks (fmap (fmap f) g)

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

... and now we can add a Show instance, too!

instance (Show k, Show v) => Show (Table k v) where
    show (Table ks f) = case ks of
        All    -> "<function>"
        Some s -> unlines (do
            k <- Set.toList s
            let Just v = f k
            return (show (k, v)) )

Let's give it a test drive:

>>> names'
(0,("Gabriel","Gonzalez"))
(1,("Oscar","Boykin"))
(2,("Edgar","Codd"))

>>> handles'
(0,"GabrielG439")
(1,"posco")
(3,"avibryant")

>>> join2 names' handles'
(0,(("Gabriel","Gonzalez"),"GabrielG439"))
(1,(("Oscar","Boykin"),"posco"))

So far, so good!

Alternative

However, we need to support more than just inner joins. We'd also like to support left, right, and outer joins, too.

Conceptually, a left join is one in which values from the right table may be optionally present. One way we could implement this would be to define a function that converts a finite map to a function defined on all keys. This function will return Nothing for keys not present in the original finite map and Just for keys that were present:

optional :: Table k v -> Table k (Maybe v)
optional (Table ks f) =
    Table All (\k -> fmap Just (f k) <|> pure Nothing)

Then we could define leftJoin in terms of join2 and optional:

leftJoin :: Table k a -> Table k b -> Table k (a, Maybe b)
leftJoin t1 t2 = join2 t1 (optional t2)

However, if we try to compile the above code, the compiler will give us a really interesting error message:

Ambiguous occurrence ‘optional’
It could refer to either ‘Main.optional’,
or ‘Control.Applicative.optional’

Apparently, Control.Applicative has an optional function, too. Let's pause to check out the type of this mysterious function:

optional :: Alternative f => f v -> f (Maybe v)

Wow! That type signature is surprisingly similar to the one we wrote. In fact, if Table k implemented the Alternative interface, the types would match.

Alternative is a type class (also provided by Control.Applicative) that greatly resembles the Monoid type class:

class Applicative f => Alternative f where
    empty :: f a

    (<|>) :: f a -> f a -> f a

... and the core Alternative laws are identical to the Monoid laws:

x <|> empty = x

empty <|> x = x

(x <|> y) <|> z = x <|> (y <|> z)
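
For comparison, here are the Monoid laws, with empty playing the role of mempty and (<|>) playing the role of mappend:

x `mappend` mempty = x

mempty `mappend` x = x

(x `mappend` y) `mappend` z = x `mappend` (y `mappend` z)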

Let's dig even deeper and see how optional uses this Alternative type class:

optional v = fmap Just v <|> pure Nothing

Even the implementation is eerily similar! This strongly suggests that we should find a way to make our Table type implement Alternative. Perhaps something like this would work:

instance Ord k => Alternative (Table k) where
    empty =
        Table 0 (pure empty)

    Table ks1 f1 <|> Table ks2 f2 =
        Table (ks1 + ks2) (liftA2 (<|>) f1 f2)

Compare this to our Applicative instance (duplicated here):

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

The Alternative instance mirrors our Applicative instance except that we:

  • replace pure v with empty
  • replace (<*>) with (<|>)
  • replace 1 with 0
  • replace (*) with (+)

... and what's really neat is that now the optional function from Control.Applicative behaves just like the original optional function we wrote. (Exercise: Use equational reasoning to verify this.)

Derived joins

Armed with this Alternative instance, we can now implement leftJoin in terms of the optional provided by Control.Applicative:

leftJoin t1 t2 = join2 t1 (optional t2)

Sure enough, it works:

>>> leftJoin names' handles'
(0,(("Gabriel","Gonzalez"),Just "GabrielG439"))
(1,(("Oscar","Boykin"),Just "posco"))
(2,(("Edgar","Codd"),Nothing)

Let's check out the type that the compiler infers for leftJoin:

>>> :type leftJoin
leftJoin :: Alternative f => f a -> f b -> f (a, Maybe b)

Notice how there's no longer anything Table-specific about leftJoin. It works for anything that implements the Alternative interface. I could leftJoin two Maybes if I really wanted to:

>>> leftJoin (Just 1) (Just 2)
Just (1,Just 2)
>>> leftJoin (Just 1) Nothing
Just (1,Nothing)
>>> leftJoin Nothing (Just 1)
Nothing

... or two lists:

>>> leftJoin [1, 2] [3, 4]
[(1,Just 3),(1,Just 4),(1,Nothing),(2,Just 3),(2,Just 4),(2,Nothing)]

In fact, I don't even really need specialized leftJoin or rightJoin functions. optional is sufficiently lightweight that I can inline a right join on the fly:

>>> join2 (optional names') handles'
(0,(Just ("Gabriel","Gonzalez"),"GabrielG439"))
(1,(Just ("Oscar","Boykin"),"posco"))
(3,(Nothing,"avibryant"))

What happens if I try to do an "outer join"?

>>> -- DISCLAIMER: Technically not the same as an SQL outer join
>>> let o = join2 (optional names') (optional handles')
>>> o
<function>

The above "outer join" is defined for all keys (because both sides are optional), so we get back a function! While we can't list the Table (because it's conceptually infinite), we can still perform lookups on it:

>>> lookup o 0
Just (Just ("Gabriel","Gonzalez"),Just "GabrielG439")
>>> lookup o 2
Just (Just ("Edgar","Codd"),Nothing)
>>> lookup o 3
Just (Nothing,Just "avibryant")
>>> lookup o 4
Just (Nothing,Nothing)

... and if we were to join our "infinite" table against a finite table, we would get back a finite table. (Exercise: Try it! Define a new finite table to join against o and see what happens.)

What's nice about optional is that we can easily left-join or right-join in multiple tables at a time. If I had four tables of types:

t1 :: Table k a
t2 :: Table k b
t3 :: Table k c
t4 :: Table k d

... I could left join t2, t3, and t4 into t1 by just writing:

liftA4 (,,,) t1 (optional t2) (optional t3) (optional t4)
    :: Table k (a, Maybe b, Maybe c, Maybe d)

Now that I think about it, I don't even really need to provide join2/join3/join4/join5 since they are not much shorter than using the liftA family of functions in Control.Applicative:

-- Exercise: What would `join1` be?
join2 = liftA2 (,)
join3 = liftA3 (,,)
join4 = liftA4 (,,,)
join5 = liftA5 (,,,,)
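
One caveat: Control.Applicative only exports liftA, liftA2, and liftA3, so the liftA4 and liftA5 used here are assumed helpers rather than library functions. If you want them, each is a one-liner in applicative style:

-- Not exported by Control.Applicative; a sketch of one missing helper
liftA4 :: Applicative f => (a -> b -> c -> d -> e)
       -> f a -> f b -> f c -> f d -> f e
liftA4 f a b c d = f <$> a <*> b <*> c <*> d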

In other words, I can implement almost any imaginable join just by using liftA{n} and some permutation of optionals. I don't even know what I'd call this join:

liftA5 (,,,,) t1 (optional t2) t3 (optional t4) t5

... but the beauty is that I don't have to give a name for it. I can easily write anonymous joins on the fly using the Control.Applicative module. Moreover, the above code will work for anything that implements the Alternative interface.

Conclusion

Control.Applicative provides a very general API for relational joins: the Alternative type class (which includes Applicative, since Applicative is a superclass of Alternative). Perhaps Control.Applicative could be improved slightly by providing the join{n} family of functions listed above, but it's still highly usable in its present state.

Note that this trick only works for relational abstractions embedded within Haskell. The API can be generalized to external relational data stores (e.g., Postgres), which I will cover in a subsequent post.

{}
Well-Typed (Haskell Consultants) ( Feed )
Friday, 05 December 2014
Compose conference and New York City Haskell courses

Well-Typed is happy to announce that we are sponsoring

C◦mp◦se conference

Friday, January 30 – Sunday, February 1, 2015, New York City

This conference is focused on functional programming and features a keynote by Stephanie Weirich on dependent types as well as invited talks by Anthony Cowley, Maxime Ransan and Don Syme, plus a whole lot of additional contributed talks. There’s also an “unconference” with small workshops and tutorials as well as the opportunity to get your hands dirty and try things out yourself.

For several years now, we have been running successful Haskell courses in collaboration with Skills Matter. For the C◦mp◦se conference, we have decided to bring these courses to New York! You can participate in our Haskell courses directly before or directly after the conference (or both):

Fast Track to Haskell

Thursday, January 29 – Friday, January 30, 2015, New York City

(Don’t worry, there’s no overlap with Stephanie Weirich’s keynote on Friday evening that marks the start of C◦mp◦se.)

You can register here.

This course is for developers who want to learn about functional programming in general or Haskell in particular. It introduces important concepts such as algebraic datatypes, pattern matching, type inference, polymorphism, higher-order functions, explicit effects and, of course, monads. It is a compact tour with lots of hands-on exercises that lay a solid foundation for further adventures into Haskell or functional programming.

Advanced Haskell

Monday, February 2 – Tuesday, February 3, 2015, New York City

You can register here.

This course is for developers who have some experience in Haskell and want to know how to work on larger projects and how things scale. The course covers important topics such as selecting the right data structures for the task at hand from a functional perspective, and takes a thorough look at Haskell’s “lazy evaluation” and how to reason about the time and space performance of Haskell programs. There’s also some focus on how to use Haskell’s powerful abstraction mechanisms and concepts such as Applicative Functors, Monads and Monad Transformers to help you organize larger code bases. Finally, depending on time and demand, there’s the opportunity to look at parallelism and concurrency, or at type-level programming. Once again, the course comes with several carefully designed hands-on exercises and provides room for specific questions that the participants might have.

Both courses will be taught by Duncan Coutts, co-founder and partner at Well-Typed. He is both an experienced teacher and a developer involved in many of Well-Typed’s commercial Haskell projects. He has seen a lot of Haskell code, and knows from experience which techniques and approaches work and which do not.

Well-Typed training courses

In general, our courses are very practical, but don’t shy away from theory where necessary. Our teachers are all active Haskell developers with not just training experience, but active development experience as well. In addition to these two courses in New York City, we regularly offer courses in London, and plan to offer courses in other European locations, too.

We also provide on-site training on request nearly anywhere in the world. If you want to know more about our training or have any feedback or questions, have a look at our dedicated training page or just drop us a mail.

{}
Yesod Web Framework News ( Feed )
Wednesday, 03 December 2014
Yesod's new scaffolding

After lots of discussion and testing, I'm happy to announce a significant update to the Yesod scaffolded site. You can see some mailing list discussions on this at:

  • Yesod 1.4 reminder
  • Proposed overhaul to the scaffolding

The newest version of yesod-bin (and all versions going forward) will ship with this newly minted scaffolding. There is no migration guide for existing sites; scaffoldings aren't the kind of thing to be easily migrated, and all existing sites will continue to function thanks to the absence of breaking API changes. Nonetheless, if people are excited enough about these changes that they'd like to integrate them into their existing sites, please start up a discussion on the mailing list.

The primary motivation for this change is an overhaul of the settings system. The new system is much more modular regarding environment variables, config files, and hard-coded settings. It's also much simpler to determine where a setting value comes from, and to configure things differently in different environments. Since I've already described these changes in quite some detail, I will instead point you to the second mailing list thread linked above.

To give an example of how this works, take a look at the following lines from the new settings.yml file:

host:           "_env:HOST:*4" # any IPv4 host
port:           "_env:PORT:3000"
approot:        "_env:APPROOT:http://localhost:3000"

In the old scaffolding, both PORT and APPROOT were recognized environment variable names that would override whatever was defined in settings.yml. However, that overriding occurred in a library (Yesod.Default.Config) and was not obvious to users, which led to many bug reports/support questions. Additionally, there was no way to override the host/interface binding via environment variables. In the new system:

  • It's obvious from reading the config file which settings can be overridden via environment variables, and what the names of those variables are.
  • It's trivial to change the names, or disable environment variable overriding (e.g., replace the approot line with approot: http://localhost:3000).
  • New arbitrary settings can be added at will, and easily use environment variables to override their values.

The other change is that we've replaced the ad-hoc Import prelude replacement previously found in the scaffolding with ClassyPrelude.Yesod. For users not interested in classy-prelude, it should be trivial to replace import ClassyPrelude.Yesod with import Prelude, import Yesod, and whatever else you want. Greg has also spent quite some time in the past week improving the documentation for both classy-prelude and mono-traversable, which will help things out considerably.

Besides that, I've done a bunch of minor maintenance on the scaffolding, cleaning up import lists, adding some missing documentation, and so on.

Like any new release, it's certainly possible that there are some issues, so if you notice any, please feel free to bring them up. Also, documentation fixes are great as well. As always, please send any pull requests against the postgres branch of the yesod-scaffold repo, from which we automatically merge changes to all other versions of the scaffolding.

{}
Glasgow Haskell Compiler News ( Feed )
Monday, 01 December 2014
GHC Weekly News - 2014/12/01

Hi *,

It's that time again for some good ol' fashioned GHC news, this time just after the holidays. Some of the things happening in the past week include:

  • Partial Type Signatures has been merged into HEAD. Many thanks to Thomas Winant, who worked on this feature for several months!
  • As mentioned last week, GHC 7.10 will no longer ship haskell98 and haskell2010, nor old-time or old-locale.

Closed tickets this week include: #9827, #7475, #9826, #7460, #7643, #8044, #8031, #7072, #3654, #7033, #9834, #6098, #6022, #5859, #5763, #9838, #9830, #7243, #9736, #9574, #5158, #9844, #9281, #9818, #4429, #8815, #2182, #4290, #9005, #9828, #9833, #9582, and #9850.

Another huge thanks to Thomas Miedema who closed an extraordinary amount of tickets for us - the above list is still not even complete, and he's made a huge impact on the amount of open tickets in the past month or so.

Experimental package releases via Stackage Server

Right now, Hackage has no concept of a stable and an unstable release of a package. As a result, authors are hesitant to release code to Hackage unless it's already stable. But it's difficult to get people to test new versions of packages if they're difficult to install. Installing a single new package from Github may not be difficult, but sometimes you want people to test out a new set of versions for multiple packages, which can be tedious. This blog post will demonstrate how you can use Stackage Server to make that easy.

While the primary purpose of Stackage Server is to host the official Stackage snapshots, it has been designed as a completely generic server for hosting any set of packages desired, including custom packages not yet released to Hackage. All you need to do is:

  1. Create an account on Stackage Server (by logging in with Google+ or Mozilla Persona)
  2. Create a tarball in the correct format (described below)
  3. Upload it from the snapshot upload page

Tarball format

You can download a sample bundle file by clicking on the "Bundle" link at the top of any snapshot page. It might be useful to open one up as you look through the rest of this section.

You can view the tarball parsing code in the Stackage Server codebase itself. The format is designed to be simple to replicate and extensible for future functionality. (In fact, the slug file feature I mention below was only recently added.)

The tarball must be tarred in a format that the tar package can read, and then gzipped. Each file in the tarball is treated independently; directory structure inside the tarball is ignored. Using tar cfz mybundle.tar.gz somedirectory is usually sufficient to meet these criteria.

There are four kinds of files recognized inside the tarball:

  • desc gives the human-readable title and description for the snapshot. Put the title on the first line, and the description on the following lines. (Note that, currently, we only display the title on the site, though we may add the description to the display in the future.)
  • slug is a recommendation for the short name of the snapshot. For example, the most recent GHC 7.8 snapshot as I write this is http://www.stackage.org/snapshot/2014-11-26-ghc78-exc, which has a slug of 2014-11-26-ghc78-exc. Slugs must be globally unique, so if someone else has already taken that slug, Stackage Server will append a randomized token to the end.
  • hackage is a list of all the package/version combos to be included in this snapshot from Hackage. For example, you might have:

    foo-1.0.0
    bar-1.0.1

    You're free to have multiple versions per package.

  • Any file ending in .tar.gz will be treated as a custom sdist tarball, and will be made available for download from stackage.org. This is how you can provide custom versions of a package not released on Hackage. As an example of this, here's a snapshot with two unreleased packages in it.

Custom snapshot

Another use case is customizing an official Stackage snapshot. For example, you may be using a certain snapshot, but want to get a newer version of one of the packages from Hackage, or write a custom patch for one of the package versions and use that. If so, all you need to do is:

  1. Download the bundle file
  2. Tweak its contents
  3. Upload it
  4. Use the new URL

Replace or augment Hackage?

The instructions for using a Stackage snapshot mention replacing the hackage.haskell.org remote-repo line in your cabal config file with the stackage.org URL. This makes sense if you're using a snapshot that provides all the packages from Hackage that you'll need. However, if you're testing out a few new packages, it's simpler to provide just those few extra packages, and add an extra remote-repo line to your config file instead of replacing the primary entry. Note that this trick can be used to augment a Stackage snapshot as well as to add extra packages on top of Hackage.

Caveats

You should keep two things in mind when using Stackage Server in this manner:

  • Snapshots you create live forever. In cases of extreme issues (like accidentally uploading copyrighted data) we will of course assist in removing the snapshot. But generally speaking, a snapshot is forever, just like uploading a package to Hackage makes it available forever.
  • All snapshots are publicly listed, so you don't want to put any sensitive information in there. Of course, the Stackage Server codebase is open source, so you're free to run your own, private instance if you'd like. Alternatively, FP Complete provides private Stackage Server instances as a service; feel free to contact us for more information.

Other uses

Creating a generic tool like this has the advantage that it can be (ab)used for purposes other than the author's original intent. In this case, I've described some intended alternate use cases for this functionality. If people come up with other unintended use cases, let me know!

{}
Well-Typed (Haskell Consultants) ( Feed )
Thursday, 27 November 2014
Haskell development job with Well-Typed

tl;dr If you’d like a job with us, send your application before Christmas.

We are looking for a Haskell expert to join our team at Well-Typed. This is a great opportunity for someone who is passionate about Haskell and who is keen to improve and promote Haskell in a professional context.

About Well-Typed

We are a team of top notch Haskell experts. Founded in 2008, we were the first company dedicated to promoting the mainstream commercial use of Haskell. To achieve this aim, we help companies that are using or moving to Haskell by providing a range of services including consulting, development, training, and support and improvement of the Haskell development tools. We work with a wide range of clients, from tiny startups to well known multinationals. We have established a track record of technical excellence and satisfied customers.

Our company has a strong engineering culture. All our managers and decision makers are themselves Haskell developers. Most of us have an academic background and we are not afraid to apply proper computer science to customers’ problems, particularly the fruits of FP and PL research.

We are a self-funded company so we are not beholden to external investors and can concentrate on the interests of our clients, our staff and the Haskell community.

About the job

We are looking for someone to join as a general member of our team, not for a single specific project or task. The work could cover any of the projects and activities that we are involved in as a company. The work may involve:

  • working on GHC, libraries and tools;

  • Haskell application development;

  • working directly with clients to solve their problems;

  • teaching Haskell and developing training materials.

We try wherever possible to arrange tasks within our team to suit people’s preferences, and to rotate tasks to provide variety and interest.

Well-Typed has a variety of clients. For some we do proprietary Haskell development and consulting. For others, much of the work involves open-source development and cooperating with the rest of the Haskell community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from industry, academia or personal interest. Familiarity with other languages, low-level programming and good software engineering practices are also useful. Good organisation and the ability to manage your own time and reliably meet deadlines are important. You should also have good communication skills. Being interested or having experience in teaching Haskell (or other technical topics) is a bonus. Experience of consulting or running a business is also a bonus. You are likely to have a bachelor’s degree or higher in computer science or a related field, although this isn’t a requirement.

Offer details

The offer is initially for a one-year full-time contract, with the intention of a long-term arrangement. We are also happy to consider applications for part-time work. We are a distributed company, so everyone works remotely. Living in England is not required. We offer flexible hours. We may be able to offer either employment or sub-contracting, depending on the jurisdiction in which you live. We operate a fair profit share scheme, so the salary is variable but with a minimum guarantee. For a full-time annual equivalent (with the statutory minimum holidays), the guaranteed minimum is GBP 34.8k, and based on past performance the expected level is approximately 40% higher.

If you are interested, please apply via info@well-typed.com. Tell us why you are interested and why you would be a good fit for the job, and attach your CV. Please also indicate how soon you might be able to start. We are more than happy to answer informal enquiries. Contact Duncan Coutts (duncan@well-typed.com, dcoutts on IRC) or Andres Löh (andres@well-typed.com, kosmikus on IRC) for further information.

To ensure we can properly consider your application, please get it to us by December 22nd, 2014. We expect to do interviews at the beginning of January.

Image Processing with Comonads

Introduction

A Comonad is a structure from category theory dual to Monad.

Comonads are well-suited for image processing
– Pretty much everyone on the internet

Whenever Comonads come up, people usually mention the canonical example of evaluating cellular automata. Because many image processing algorithms can be modelled as cellular automata, this is also a frequently mentioned example.

However, when I was trying to explain Comonads to a friend recently, I couldn’t find any standalone example of how exactly this applies to image processing, so I decided to illustrate this myself.

I will not attempt to explain Comonads, for that I refer to Gabriel Gonzalez’ excellent blogpost. This blogpost is written in literate Haskell so you should be able to just load it up in GHCi and play around with it (you can find the raw .lhs file here).

> {-# LANGUAGE BangPatterns #-}
> import qualified Codec.Picture       as Juicy
> import           Control.Applicative ((<$>))
> import           Data.List           (sort)
> import           Data.Maybe          (fromMaybe, maybeToList)
> import qualified Data.Vector         as V
> import qualified Data.Vector.Generic as VG
> import           Data.Word           (Word8)

A simple image type

We need a simple type for images. Let’s use the great JuicyPixels library to read and write images. Unfortunately, we cannot use the image type defined in JuicyPixels, since JuicyPixels stores pixels in a Storable-based Vector.

We want to be able to store any kind of pixel value, not just Storable values, so we declare our own BoxedImage. We will simply store pixels in row-major order in a boxed Vector.

> data BoxedImage a = BoxedImage
>     { biWidth  :: !Int
>     , biHeight :: !Int
>     , biData   :: !(V.Vector a)
>     }

Because our BoxedImage allows any kind of pixel value, we get a straightforward Functor instance:

> instance Functor BoxedImage where
>     fmap f (BoxedImage w h d) = BoxedImage w h (fmap f d)

Now, we want to be able to convert from a JuicyPixels image to our own BoxedImage and back again. In this blogpost, we will only deal with grayscale images (BoxedImage Word8), since this makes the image processing algorithms mentioned here a lot easier to understand.

> type Pixel = Word8  -- Grayscale
> boxImage :: Juicy.Image Juicy.Pixel8 -> BoxedImage Pixel
> boxImage image = BoxedImage
>     { biWidth  = Juicy.imageWidth image
>     , biHeight = Juicy.imageHeight image
>     , biData   = VG.convert (Juicy.imageData image)
>     }
> unboxImage :: BoxedImage Pixel -> Juicy.Image Juicy.Pixel8
> unboxImage boxedImage = Juicy.Image
>     { Juicy.imageWidth  = biWidth boxedImage
>     , Juicy.imageHeight = biHeight boxedImage
>     , Juicy.imageData   = VG.convert (biData boxedImage)
>     }

With the help of boxImage and unboxImage, we can now call out to the JuicyPixels library:

> readImage :: FilePath -> IO (BoxedImage Pixel)
> readImage filePath = do
>     errOrImage <- Juicy.readImage filePath
>     case errOrImage of
>         Right (Juicy.ImageY8 img) -> return (boxImage img)
>         Right _                   ->
>             error "readImage: unsupported format"
>         Left err                  ->
>             error $ "readImage: could not load image: " ++ err
> writePng :: FilePath -> BoxedImage Pixel -> IO ()
> writePng filePath = Juicy.writePng filePath . unboxImage

Focused images

While we can already write simple image processing algorithms (e.g. tone mapping) using just the Functor interface, the kind of algorithms we are interested in today need to take a neighbourhood of input pixels in order to produce a single output pixel.

For this purpose, let us create an additional type that focuses on a specific location in the image. We typically want to use a smart constructor for this, so that we don’t allow focusing on an (x, y)-pair outside of the BoxedImage.

> data FocusedImage a = FocusedImage
>     { piBoxedImage :: !(BoxedImage a)
>     , piX          :: !Int
>     , piY          :: !Int
>     }

Conversion to and from a BoxedImage is easy:

> focus :: BoxedImage a -> FocusedImage a
> focus bi
>     | biWidth bi > 0 && biHeight bi > 0 = FocusedImage bi 0 0
>     | otherwise                         =
>         error "Cannot focus on empty images"
> unfocus :: FocusedImage a -> BoxedImage a
> unfocus (FocusedImage bi _ _) = bi

And the functor instance is straightforward, too:

> instance Functor FocusedImage where
>     fmap f (FocusedImage bi x y) = FocusedImage (fmap f bi) x y

Now, we can implement the fabled Comonad class:

> class Functor w => Comonad w where
>     extract :: w a -> a
>     extend  :: (w a -> b) -> w a -> w b

The implementation of extract is straightforward. extend is a little trickier. If we look at its concrete type:

extend :: (FocusedImage a -> b) -> FocusedImage a -> FocusedImage b

We want to convert all pixels in the image, and the conversion function is supplied as f :: FocusedImage a -> b. In order to apply this to all pixels in the image, we must thus create a FocusedImage for every position in the image. Then, we can simply pass this to f which gives us the result at that position.

> instance Comonad FocusedImage where
>     extract (FocusedImage bi x y) =
>         biData bi V.! (y * biWidth bi + x)
> 
>     extend f (FocusedImage bi@(BoxedImage w h _) x y) = FocusedImage
>         (BoxedImage w h $ V.generate (w * h) $ \i ->
>             let (y', x') = i `divMod` w
>             in f (FocusedImage bi x' y'))
>         x y

Proving that this instance adheres to the Comonad laws is a bit tedious but not that hard if you make some assumptions such as:

V.generate (V.length v) (\i -> v V.! i) = v
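
For reference, the laws in question, stated in terms of extract and extend (this is the standard presentation, not something specific to this post):

extend extract      = id
extract . extend f  = f
extend f . extend g = extend (f . extend g)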

We’re almost done with our mini-framework. One thing that remains is that we want to be able to look around in a pixel’s neighbourhood easily. In order to do this, we create this function which shifts the focus by a given pair of coordinates:

> neighbour :: Int -> Int -> FocusedImage a -> Maybe (FocusedImage a)
> neighbour dx dy (FocusedImage bi x y)
>     | outOfBounds = Nothing
>     | otherwise   = Just (FocusedImage bi x' y')
>   where
>     x'          = x + dx
>     y'          = y + dy
>     outOfBounds =
>         x' < 0 || x' >= biWidth bi ||
>         y' < 0 || y' >= biHeight bi

Median filter

If you have ever taken a photo when it is fairly dark, you will notice that there is typically a lot of noise. We’ll start from this photo which I took a couple of weeks ago, and try to reduce the noise in the image using our Comonad-based mini-framework.

The original image

A really easy noise reduction algorithm is one that looks at a local neighbourhood of a pixel, and replaces that pixel by the median of all the pixels in the neighbourhood. This can be easily implemented using neighbour and extract:

> reduceNoise1 :: FocusedImage Pixel -> Pixel
> reduceNoise1 pixel = median
>     [ extract p
>     | x <- [-2, -1 .. 2], y <- [-2, -1 .. 2]
>     , p <- maybeToList (neighbour x y pixel)
>     ]

Note how our Comonadic function takes the form of w a -> b. With a little intuition, we can see how this is the dual of a monadic function, which would be of type a -> m b.

We used an auxiliary function which simply gives us the median of a list:

> median :: Integral a => [a] -> a
> median xs
>     | odd len   = sort xs !! (len `div` 2)
>     | otherwise = case drop (len `div` 2 - 1) (sort xs) of
>         (x : y : _) -> x `div` 2 + y `div` 2
>         _           -> error "median: empty list"
>   where
>     !len = length xs

So reduceNoise1 is a function which takes a pixel in the context of its neighbours, and returns a new pixel. We can use extend to apply this comonadic action to an entire image:

extend reduceNoise1 :: FocusedImage Pixel -> FocusedImage Pixel

After applying reduceNoise1

Running this algorithm on our original picture already gives an interesting result, and the noise has definitely been reduced. However, you will notice that it has this watercolour-like look, which is not what we want.

Blur filter

A more complicated noise-reduction filter uses a blur effect first. We can implement a blur by replacing each pixel by a weighted sum of its neighbouring pixels. At the edges, we just keep the pixels as-is.

This function implements the simple blurring kernel:

> blur :: FocusedImage Pixel -> Pixel
> blur pixel = fromMaybe (extract pixel) $ do
>     let self = fromIntegral (extract pixel) :: Int
>     topLeft     <- extractNeighbour (-1) (-1)
>     top         <- extractNeighbour   0  (-1)
>     topRight    <- extractNeighbour   1  (-1)
>     right       <- extractNeighbour   1    0
>     bottomRight <- extractNeighbour   1    1
>     bottom      <- extractNeighbour   0    1
>     bottomLeft  <- extractNeighbour (-1)   1
>     left        <- extractNeighbour (-1)   0
>     return $ fromIntegral $ (`div` 16) $
>         self * 4 +
>         top * 2 + right * 2 + bottom * 2 + left * 2 +
>         topLeft + topRight + bottomRight + bottomLeft
>   where
>     extractNeighbour :: Int -> Int -> Maybe Int
>     extractNeighbour x y = fromIntegral . extract <$> neighbour x y pixel

The result is the following image:

The image after using blur

This image contains less noise, but as we expected, it is blurry. This is not unfixable though: if we subtract the blurred picture from the original picture, we get the edges:

The edges in the image

If we apply a high-pass filter here, i.e., we drop all edges below a certain threshold, such that we only retain the “most significant” edges, we get something like:

The edges after applying a threshold

While there is still some noise, we can see that it's clearly been reduced. If we now add this to the blurred image, we get noise-reduced image #2. The noise is not reduced as much as in the first image, but we managed to keep more texture in the image (and not make it look like a watercolour).

After applying reduceNoise2

Our second noise reduction algorithm is thus:

> reduceNoise2 :: FocusedImage Pixel -> Pixel
> reduceNoise2 pixel =
>     let !original  = extract pixel
>         !blurred   = blur pixel
>         !edge      = fromIntegral original - fromIntegral blurred :: Int
>         !threshold = if edge < 7 && edge > (-7) then 0 else edge
>     in fromIntegral $ fromIntegral blurred + threshold

We can already see how the Comonad pattern lets us combine extract, blur, and simple arithmetic to achieve powerful results.

A hybrid algorithm

That we are able to compose these functions easily is even more apparent if we try to build a hybrid filter, which uses a weighted sum of the original, reduceNoise1, and reduceNoise2.

> reduceNoise3 :: FocusedImage Pixel -> Pixel
> reduceNoise3 pixel =
>     let !original = extract      pixel
>         !reduced1 = reduceNoise1 pixel
>         !reduced2 = reduceNoise2 pixel
>     in (original `div` 4) + (reduced1 `div` 4) + (reduced2 `div` 2)

The noise here has been reduced significantly, while not making the image look like a watercolour. Success!

After applying reduceNoise3

Here is our main function which ties everything up:

> main :: IO ()
> main = do
>     image <- readImage filePath
>     writePng "images/2014-11-27-stairs-reduce-noise-01.png" $
>         unfocus $ extend reduceNoise1 $ focus image
>     writePng "images/2014-11-27-stairs-reduce-noise-02.png" $
>         unfocus $ extend reduceNoise2 $ focus image
>     writePng "images/2014-11-27-stairs-reduce-noise-03.png" $
>         unfocus $ extend reduceNoise3 $ focus image
>   where
>     filePath = "images/2014-11-27-stairs-original.png"

And here is a 300% crop which should show the difference between the original (left) and the result of reduceNoise3 (right) better:

Conclusion

I hope this example has given some intuition as to how Comonads can be used in real-world scenarios. For me, what made it click was realising how w a -> b for Comonad relates to a -> m b for Monad, and how these types of functions naturally compose well.

Additionally, I hope this blogpost provided some insight into image processing algorithms as well, which I also think is an interesting field.

Thanks to Alex Sayers for proofreading!

{}
Gabriel Gonzalez's Blog (Haskell for all) ( Feed )
Monday, 24 November 2014
How to build library-agnostic streaming sources

The Haskell ecosystem has numerous libraries for effectful stream programming, including, but not limited to:

  • List
  • conduit
  • enumerator
  • io-streams
  • iterIO
  • iteratee
  • list-t
  • logict
  • machines
  • pipes

Unfortunately, this poses a problem for library writers. Which streaming library should they pick when providing effectful streaming components?

Sometimes the correct answer is: none of the above! We can often build streaming and effectful generators using only the base and transformers libraries!

The trick is to build polymorphic generators using only the MonadPlus and MonadTrans type classes. These generators can then be consumed by any library whose ListT implementation provides MonadPlus and MonadTrans instances.

MonadPlus

I like to think of MonadPlus as the "list building" type class. This is because you can assemble a list using return, mzero, and mplus:

>>> import Control.Monad (MonadPlus(..))
>>> mzero :: [Int]
[]
>>> return 1 :: [Int]
[1]
>>> return 1 `mplus` return 2 :: [Int] -- [1] ++ [2]
[1, 2]

In other words, mzero is analogous to [], mplus is analogous to (++), and return builds a singleton list.

However, many things other than lists implement MonadPlus, including every implementation of ListT in the wild. Therefore, if we build collections using MonadPlus operations these collections will type-check as ListT streams as well.

Let's provide the following helper function to convert a list to this more general MonadPlus type:

select :: MonadPlus m => [a] -> m a
select [] = mzero
select (x:xs) = return x `mplus` select xs

-- or: select = foldr (\x m -> return x `mplus` m) mzero
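
As a quick sanity check, select specialized to lists is just the identity, which follows directly from the definition above:

>>> select [1, 2, 3] :: [Int]
[1,2,3]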

Note that this select function has some nice mathematical properties:

select (xs `mplus` ys) = (select xs) `mplus` (select ys)
select mzero = mzero

-- This assumes the distributivity law for `MonadPlus`
select . (f >=> g) = (select . f) >=> (select . g)
select . return = return

MonadTrans

Using select and lift (from MonadTrans), we can build list comprehensions that interleave effects, like this:

example :: (MonadTrans t, MonadPlus (t IO)) => t IO ()
example = do
    x <- select [1, 2, 3]
    lift (putStrLn ("x = " ++ show x))
    y <- select [4, 5, 6]
    lift (putStrLn ("y = " ++ show y))

You can read the above code as saying:

  • Let x range from 1 to 3
  • Print x every time it selects a new value
  • For each value of x, let y range from 4 to 6
  • Print y every time it selects a new value

Notice how example doesn't depend on any particular streaming library, so we can run it with diverse ListT implementations, all of which implement MonadPlus and MonadTrans:

>>> Pipes.runListT example  -- This requires `pipes-4.1.4`
x = 1
y = 4
y = 5
y = 6
x = 2
y = 4
y = 5
y = 6
x = 3
y = 4
y = 5
y = 6
>>> _ <- Control.Monad.Logic.observeAllT example
<Exact same output>
>>> _ <- ListT.toList example
<Exact same output>

However, we can use this trick for more than just list comprehensions. We can build arbitrary lazy and effectful streams this way!

Generators

Here's an example of a generator that lazily emits lines read from standard input:

import Control.Monad (MonadPlus(..))
import Control.Monad.Trans.Class (MonadTrans(..))
import System.IO (isEOF)

stdinLn :: (MonadTrans t, MonadPlus (t IO)) => t IO String
stdinLn = do
    eof <- lift isEOF
    if eof
        then mzero
        else lift getLine `mplus` stdinLn

You can read the above code as saying:

  • Check if we are at the end of the file
  • If we're done, then return an empty list
  • If we're not done, prepend a getLine onto a recursive call to stdinLn

We can prove this works by writing a program that forwards these lines to standard output:

echo :: (MonadTrans t, MonadPlus (t IO)) => t IO ()
echo = do
    str <- stdinLn
    lift (putStrLn str)

Now we can run echo using any of our favorite ListT implementations and it will do the right thing, streaming lines lazily from standard input to standard output in constant space:

>>> Pipes.runListT echo
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
<Ctrl-D>
>>>

The exception is the transformers library, whose ListT implementation does not stream or run in constant space.

Combinators

We can also implement lazy variations on Control.Monad combinators using this interface.

For example, we can implement a lazy variation on replicateM using just select and replicate:

replicateM' :: MonadPlus m => Int -> m a -> m a
replicateM' n m = do
    m' <- select (replicate n m)
    m'

-- or: replicateM' n = join . select . replicate n

We can use this lazy replicateM' to lazily echo 10 lines from standard input to standard output:

example :: (MonadTrans t, MonadPlus (t IO)) => t IO ()
example = do
    str <- replicateM' 10 (lift getLine)
    lift (putStrLn str)

We can implement lazy variations on mapM and forM, too, except now their implementations are so trivial that they don't even deserve their own functions:

mapM' :: Monad m => (a -> m b) -> m a -> m b
mapM' = (=<<)

forM' :: Monad m => m a -> (a -> m b) -> m b
forM' = (>>=)

example :: (MonadTrans t, MonadPlus (t IO)) => t IO ()
example = mapM' (lift . print) (replicateM' 10 (lift getLine))

Similarly, a lazy sequence just becomes join.
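
Spelled out, that would be something like (a sketch; the type is the obvious specialization):

import Control.Monad (join)

sequence' :: Monad m => m (m a) -> m a
sequence' = join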

Conclusion

The following streaming libraries already provide their own implementation of ListT compatible with the above trick:

  • List
  • list-t
  • LogicT
  • pipes

The other streaming libraries do not currently provide a ListT implementation, but there is no reason why they couldn't! For example, conduit could implement an internal ListT type of its own and use that as an intermediate type for converting the above abstraction to the public Source type:

convert :: Monad m
        => Data.Conduit.Internal.ListT m a -> Source m a

This polymorphic API obviously does not capture all possible streaming patterns. For example, you cannot implement something like take using this API. However, I suspect that a significant number of streaming components can be written using this dependency-free interface.

Edit: Twan van Laarhoven pointed out that you can sometimes use MonadIO instead of MonadTrans, which produces nicer constraints.

{}
Glasgow Haskell Compiler News ( Feed )
Saturday, 22 November 2014
GHC Weekly News - 2014/11/21

Hi *,

To get things back on track, we have a short post following up the earlier one this week. It's been busy today so I'll keep it short:

  • The STABLE freeze Austin announced two weeks ago is happening now, although at this point a few things we wanted to ship are just 98% ready. So it may wait until Monday.
  • HEAD has been very busy the past two days as many things are now trying to merge as closely to the window as possible. Some notes follow.
  • HEAD now has support for using the 'deriving' clause for arbitrary classes (see #5462).
  • HEAD now has 64bit iOS and SMP support for ARM64, thanks to Luke Iannini. See #7942.
  • base now exports a new module for Natural numbers called Numeric.Natural following Herbert Valerio Riedel's recent proposal.
  • Your author has been busy and delayed due to some bad travel experiences the past week, so the 7.8.4 RC1 hasn't landed this past week. Hopefully it will be out by the end of this week still.

Since the last update was only a few days ago, you'd think we haven't closed a lot of tickets, but we have! Thomas Miedema has been very very persistent about closing tickets and cleaning them up, which is greatly appreciated: #9810, #8324, #8310, #9396, #9626, #9776, #9807, #9698, #7942, #9703, #8584, #8968, #8174, #9812, #9209, #9220, #9151, #9201, #9318, #9109, #9126, #8406, #8102, #8093, #8085, #8068, #8094, #9590, #9368, #2526, #9569, #8149, #9815, #5462, #9647, #8568, #9293, #7484, #1476, #9824, #9628, #7942

{}
Haskell Weekly News ( Feed )
Thursday, 20 November 2014
Haskell Weekly News: Issue 312
Welcome to issue 312 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers from October 26 to November 15, 2014.

Quotes of the Week

  • barsoap: There's no place for half measures in overkill.

Top Reddit Stories

  • Category Theory for Programmers: The Preface. From (bartoszmilewski.com), scored 141 with 68 comments.
  • Idris 0.9.15
{}
Glasgow Haskell Compiler News ( Feed )
Tuesday, 18 November 2014
GHC Weekly News 2014/11/18

Hello *,

Once more we have the GHC Weekly news! This one is a bit late due to Austin being in limbo unexpectedly for a few days last week. (The next one will of course come again on Friday to keep things straight.)

With that out of the way, let's see what exactly is going on:

  • The STABLE freeze is happening at the end of this week! That means if you have something you want to get in, try to get people aware of it! Austin (yours truly) has a backed up review queue it would seem, but hopes to clear a bunch of it out before then.
  • Herbert Valerio Riedel has finally landed integer-gmp2, AKA Phab:D86, which implements a complete overhaul of the integer-gmp library. This library will be switched on by default in GHC 7.10.1, which means the integer-gmp library version will have a super-major bump (version 1.0.0.0). This is the beginning of a longer-term vision for more flexible Integer support in GHC, as described by Herbert on the design page (see https://ghc.haskell.org/trac/ghc/wiki/Design/IntegerGmp2). This implementation also fixes a long-standing pain point where GHC would hook GMP allocations so that they lived on the GHC heap. Now GMP is simply called like any other FFI library.
  • Jan Stolarek made a heads up to help out GHC newcomers: if you see a ticket that should be easy, please tag it with the newcomer keyword! This will let us have a live search of bugs that new developers can take over. (Incidentally, Joachim mentions this is the same thing Debian is doing in their bug tracker): https://www.haskell.org/pipermail/ghc-devs/2014-November/007313.html
  • Adam Gundry, Eric Seidel, and Iavor Diatchki have grouped together to get a new, unexpected feature into 7.10: type checking plugins. Now, GHC will be able to load a regular Haskell package as a plugin during the compilation process. Iavor has a work-in-progress plugin that solves constraints for type-level natural numbers using a SMT solver. The code review from everyone was published in Phab:D489.

Closed tickets this week include: #9785, #9053, #9513, #9073, #9077, #9683, #9662, #9646, #9787, #8672, #9791, #9781, #9621, #9594, #9066, #9527, #8100, #9064, #9204, #9788, #9794, #9608, #9442, #9428, #9763, #9664, #8750, #9796, #9341, #9330, #9323, #9322, #9749, #7381, #8701, #9286, #9802, #9800, #9302, #9174, #9171, #9141, #9100, #9134, #8798, #8756, #8716, #8688, #8680, #8664, #8647, #9804, #8620, #9801, #8559, #8559, #8545, #8528, #8544, #8558

