Planet Programming Languages - Last Update: Tuesday, 17 October 2017, 07:00

Go Lang News - Dec 22

Generating code

A property of universal computation—Turing completeness—is that a computer program can write a computer program. This is a powerful idea that is not appreciated as often as it might be, even though it happens frequently. It's a big part of the definition of a compiler, for instance. It's also how the go test command works: it scans the packages to be tested, writes out a Go program containing a test harness customized for the package, and then compiles and runs it. Modern computers are so fast this expensive-sounding sequence can complete in a fraction of a second.

There are lots of other examples of programs that write programs. Yacc, for instance, reads in a description of a grammar and writes out a program to parse that grammar. The protocol buffer "compiler" reads an interface description and emits structure definitions, methods, and other support code. Configuration tools of all sorts work like this too, examining metadata or the environment and emitting scaffolding customized to the local state.

Programs that write programs are therefore important elements in software engineering, but programs like Yacc that produce source code need to be integrated into the build process so their output can be compiled. When an external build tool like Make is being used, this is usually easy to do. But in Go, whose go tool gets all necessary build information from the Go source, there is a problem. There is simply no mechanism to run Yacc from the go tool alone.

Until now, that is.

The latest Go release, 1.4, includes a new command that makes it easier to run such tools. It's called go generate, and it works by scanning for special comments in Go source code that identify general commands to run. It's important to understand that go generate is not part of go build. It contains no dependency analysis and must be run explicitly before running go build. It is intended to be used by the author of the Go package, not its clients.

The go generate command is easy to use. As a warmup, here's how to use it to generate a Yacc grammar. Say you have a Yacc input file called gopher.y that defines a grammar for your new language. To produce the Go source file implementing the grammar, you would normally invoke the standard Go version of Yacc like this:

go tool yacc -o gopher.go -p parser gopher.y

The -o option names the output file while -p specifies the package name.

To have go generate drive the process, in any one of the regular (non-generated) .go files in the same directory, add this comment anywhere in the file:

//go:generate go tool yacc -o gopher.go -p parser gopher.y

This text is just the command above prefixed by a special comment recognized by go generate. The comment must start at the beginning of the line and have no spaces between the // and the go:generate. After that marker, the rest of the line specifies a command for go generate to run.

Now run it. Change to the source directory and run go generate, then go build and so on:

$ cd $GOPATH/myrepo/gopher
$ go generate
$ go build
$ go test

That's it. Assuming there are no errors, the go generate command will invoke yacc to create gopher.go, at which point the directory holds the full set of Go source files, so we can build, test, and work normally. Every time gopher.y is modified, just rerun go generate to regenerate the parser.

For more details about how go generate works, including options, environment variables, and so on, see the design document.

Go generate does nothing that couldn't be done with Make or some other build mechanism, but it comes with the go tool—no extra installation required—and fits nicely into the Go ecosystem. Just keep in mind that it is for package authors, not clients, if only for the reason that the program it invokes might not be available on the target machine. Also, if the containing package is intended for import by go get, once the file is generated (and tested!) it must be checked into the source code repository to be available to clients.

Now that we have it, let's use it for something new. As a very different example of how go generate can help, there is a new program available in the golang.org/x/tools repository called stringer. It automatically writes string methods for sets of integer constants. It's not part of the released distribution, but it's easy to install:

$ go get golang.org/x/tools/cmd/stringer

Here's an example from the documentation for stringer. Imagine we have some code that contains a set of integer constants defining different types of pills:

package painkiller

type Pill int

const (
    Placebo Pill = iota
    Aspirin
    Ibuprofen
    Paracetamol
    Acetaminophen = Paracetamol
)

For debugging, we'd like these constants to pretty-print themselves, which means we want a method with signature,

func (p Pill) String() string

It's easy to write one by hand, perhaps like this:

func (p Pill) String() string {
    switch p {
    case Placebo:
        return "Placebo"
    case Aspirin:
        return "Aspirin"
    case Ibuprofen:
        return "Ibuprofen"
    case Paracetamol: // == Acetaminophen
        return "Paracetamol"
    }
    return fmt.Sprintf("Pill(%d)", p)
}

There are other ways to write this function, of course. We could use a slice of strings indexed by Pill, or a map, or some other technique. Whatever we do, we need to maintain it if we change the set of pills, and we need to make sure it's correct. (The two names for paracetamol make this trickier than it might otherwise be.) Plus the very question of which approach to take depends on the types and values: signed or unsigned, dense or sparse, zero-based or not, and so on.

The stringer program takes care of all these details. Although it can be run in isolation, it is intended to be driven by go generate. To use it, add a generate comment to the source, perhaps near the type definition:

//go:generate stringer -type=Pill

This rule specifies that go generate should run the stringer tool to generate a String method for type Pill. The output is automatically written to pill_string.go (a default we could override with the -output flag).

Let's run it:

$ go generate
$ cat pill_string.go
// generated by stringer -type Pill pill.go; DO NOT EDIT

package painkiller

import "fmt"

const _Pill_name = "PlaceboAspirinIbuprofenParacetamol"

var _Pill_index = [...]uint8{0, 7, 14, 23, 34}

func (i Pill) String() string {
    if i < 0 || i+1 >= Pill(len(_Pill_index)) {
        return fmt.Sprintf("Pill(%d)", i)
    }
    return _Pill_name[_Pill_index[i]:_Pill_index[i+1]]
}
$

Every time we change the definition of Pill or the constants, all we need to do is run

$ go generate

to update the String method. And of course, if we've got multiple types set up this way in the same package, that single command will update all of their String methods at once.

There's no question the generated method is ugly. That's OK, though, because humans don't need to work on it; machine-generated code is often ugly. It's working hard to be efficient. All the names are smashed together into a single string, which saves memory (only one string header for all the names, even if there are zillions of them). Then an array, _Pill_index, maps from value to name by a simple, efficient technique. Note too that _Pill_index is an array (not a slice; one more header eliminated) of uint8, the smallest integer sufficient to span the space of values. If there were more values, or there were negative ones, the generated type of _Pill_index might change to uint16 or int8: whatever works best.

The approach used by the methods printed by stringer varies according to the properties of the constant set. For instance, if the constants are sparse, it might use a map. Here's a trivial example based on a constant set representing powers of two:

const _Power_name = "p0p1p2p3p4p5..."

var _Power_map = map[Power]string{
    1:    _Power_name[0:2],
    2:    _Power_name[2:4],
    4:    _Power_name[4:6],
    8:    _Power_name[6:8],
    16:   _Power_name[8:10],
    32:   _Power_name[10:12],
    ...,
}

func (i Power) String() string {
    if str, ok := _Power_map[i]; ok {
        return str
    }
    return fmt.Sprintf("Power(%d)", i)
}

In short, generating the method automatically allows us to do a better job than we would expect a human to do.

There are lots of other uses of go generate already installed in the Go tree. Examples include generating Unicode tables in the unicode package, creating efficient methods for encoding and decoding arrays in encoding/gob, producing time zone data in the time package, and so on.

Please use go generate creatively. It's there to encourage experimentation.

And even if you don't, use the new stringer tool to write your String methods for your integer constants. Let the machine do the work.

Rust Lang News - Dec 12

Yehuda Katz and Steve Klabnik are joining the Rust Core Team

I’m pleased to announce that Yehuda Katz and Steve Klabnik are joining the Rust core team. Both of them are not only active and engaged members of the Rust community, but they also bring a variety of skills and experience with them.

Yehuda Katz will be known to many in the Rust community for his work on the initial design and implementation of the Cargo project. He is also a co-founder of Tilde, which has been using Rust commercially in their Skylight product for quite some time. Finally, he has been heavily involved with the Ruby ecosystem (through such projects as Ruby on Rails and Bundler) and with JavaScript as well (through the Ember.js and jQuery frameworks and the TC39 language committee).

Steve Klabnik is of course the primary author of the Rust guide as well as much of Rust’s documentation (not to mention independent works like Rust for Rubyists). He is passionate about improving the learnability of Rust and ensuring that the onboarding experience is smooth. Finally, Steve is an enthusiastic and tireless communicator. Wherever there is discussion about Rust to be found, be it IRC, the RFCs repo, or Reddit/HackerNews, you can be sure to find a comment or two from Steve there, explaining and clarifying the situation.

Thanks Yehuda and Steve for all your hard work, and welcome to the core team!

Erlang News - Dec 11

Erlang OTP 17.4 has been released

Erlang/OTP 17.4 is a service release on the 17 track, consisting mostly of bug fixes, but it does contain a number of new features and improvements as well.

Some highlights of the release are:

  • eldap: Nearly all TCP options are possible to give in the eldap:open/2 call.
  • ssh: Added API functions ptty_alloc/3 and ptty_alloc/4, to allocate a pseudo tty.
  • ssl: Handle servers that may send an empty SNI extension to the client.

Many thanks to the 33 different contributors to this release!

You can find more detailed information and download the release at the download page.

Lambda the Ultimate - Nov 26

John C Reynolds Doctoral Dissertation Award nominations for 2014

Presented annually to the author of the outstanding doctoral dissertation in the area of Programming Languages. The award includes a prize of $1,000. The winner can choose to receive the award at ICFP, OOPSLA, POPL, or PLDI.

I guess it is fairly obvious why professors should nominate their students (the deadline is January 4th, 2015). Newly minted PhDs should, for similar reasons, make sure their professors are reminded of these reasons. I can tell you that the competition is going to be tough this year; but hey, you didn't go into programming language theory thinking it was going to be easy, did you?

Rust Lang News - Nov 20

Cargo: Rust's community crate host

Today it is my pleasure to announce that crates.io is online and ready for action. The site is a central location to discover/download Rust crates, and Cargo is ready to start publishing to it today. For the next few months, we are asking that intrepid early adopters help us get the registry battle-tested.

Until Rust itself is stable early next year, registry dependencies will need to be updated often. Production users may want to continue using git dependencies until then.

What is Cargo?

Cargo is a package manager for Rust, in Rust. Managing dependencies is a fundamentally difficult problem, but fortunately over the last decade there’s been a lot of progress in the design of package managers. Designed by Carl Lerche and Yehuda Katz, Cargo follows the tradition of successes like Bundler and NPM:

  1. Cargo leverages crates.io to foster a thriving community of crates that can easily interoperate with one another and last for years to come.

  2. Cargo releases developers from the worry of managing dependencies and ensures that all collaborators are building the same code.

  3. Cargo lets your dependencies say how they should be built, and manages the entire build process for you.

A Community on Cargo

To get a feel for how Cargo achieves its goals, let’s take a look at some of its core mechanics.

Declaring Dependencies

Cargo makes depending on third-party code as easy as depending on the standard library. When using Cargo, each crate will have an associated manifest to describe itself and its dependencies. Adding a new dependency is now as simple as adding one line to the manifest, and this ease has allowed Cargo in just a few short months to enable a large and growing network of Rust projects and libraries which were simply infeasible before.

Cargo alone, however, is not quite the entire solution. Discovering dependencies is still difficult, and there is no guarantee that those dependencies will remain available for years to come.

crates.io

To pair with Cargo, the central crates.io site serves as a single location for publishing and discovering libraries. This repository serves as permanent storage for releases of crates over time to ensure that projects can always build with the exact same versions years later. Up until now, users of Cargo have largely just downloaded dependencies directly from the source GitHub repository, but the primary source will now be shifting to crates.io.

Other programming language communities have been quite successful with this form of central repository. For example, rubygems.org is your one-stop shop for Bundler dependencies, and npmjs.org has had over 600 million downloads in the last month alone! We intend for crates.io to serve a similar role for Rust, as a critical piece of infrastructure for Rust’s long-term stability story at 1.0.

Versioning and Reproducible Builds

Over the past few years, the concept of Semantic Versioning has gained traction as a way for library developers to easily and clearly communicate with users when they make breaking changes. The core idea of semantic versioning is simple: each new release is categorized as a minor or major release, and only major releases can introduce breakage. Cargo allows you to specify version ranges for your dependencies, with the default meaning of “compatible with”.
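As a sketch of what this looks like in practice (the crate name below is hypothetical), a Cargo manifest declares each dependency with a version requirement, and a bare version is read as a "compatible with" range:

```toml
# Cargo.toml (illustrative sketch; "somecrate" is a made-up name)
[package]
name = "myapp"
version = "0.1.0"

[dependencies]
# "0.3.1" means "any release semver-compatible with 0.3.1",
# so a future 0.3.2 would be accepted but 0.4.0 would not.
somecrate = "0.3.1"
```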

When specifying a version range, applications often end up requesting multiple versions of a single crate, and Cargo solves this by selecting the highest version of each major version (“stable code”) requested. This strongly encourages using stable releases while still allowing duplicates of unstable code (pre-1.0 and git dependencies, for example).

Once the set of dependencies and their versions have been calculated, Cargo generates a Cargo.lock to encode this information. This “lock file” is then distributed to collaborators of applications to ensure that the crates being built remain the same from one build to the next, across times, machines, and environments.

Building Code

Up to this point we’ve seen how Cargo facilitates discovery and reuse of community projects while managing what versions to use. Now Cargo just has to deal with the problem of actually compiling all this code!

With a deep understanding of the Rust code that it is building, Cargo is able to provide some nice standard features as well as some Rust-specific features:

  • By default, Cargo builds as many crates in parallel as possible. This not only applies to upstream dependencies being built in parallel, but also items for the local crate such as test suites, binaries, and unit tests.

  • Cargo supports unit testing out of the box both for crates themselves and in the form of integration tests. This even includes example programs to ensure they don’t bitrot.

  • Cargo generates documentation for all crates in a dependency graph, and it can even run Rust’s documentation tests to ensure examples in documentation stay up to date.

  • Cargo can run a build script before any crate is compiled to perform tasks such as code generation, compiling native dependencies, or detecting native dependencies on the local system.

  • Cargo supports cross compilation out of the box. Cross compiling is done by simply specifying a --target option, and Cargo will manage tasks such as compiling plugins and other build dependencies for the right platform.

What else is in store?

The launch of crates.io is a key step in moving the Cargo ecosystem forward, but the story does not end here. Usage of crates.io is architected assuming a stable compiler, which should be coming soon! There are also a number of extensions to crates.io such as a hosted documentation service or a CI build infrastructure hook which could be built out using the crates.io APIs.

This is just the beginning for crates.io, and I’m excited to start finding all Rust crates from one location. I can’t wait to see what the registry looks like at 1.0, and I can only fathom what it will look like after 1.0!

Erlang News - Nov 13

Calling for help to improve contents on erlang.org

We are aware that parts of erlang.org need improvement. For example http://www.erlang.org/article/tag/examples and http://www.erlang.org/course/course.html are outdated. We would like to see a number of small code examples for beginners. The purpose of these examples is to provide an attractive and useful introduction for people who are interested in adopting the Erlang programming language. 

Please send your input to community-manager@erlang.org. We would like to call for help from the community, since the OTP team does not have much time and it is not currently possible to submit pull requests for editorial changes to erlang.org.

Any other suggestions for erlang.org are always welcome.

Go Lang News - Nov 10

Half a decade with Go

Five years ago we launched the Go project. It seems like only yesterday that we were preparing the initial public release: our website was a lovely shade of yellow, we were calling Go a "systems language", and you had to terminate statements with a semicolon and write Makefiles to build your code. We had no idea how Go would be received. Would people share our vision and goals? Would people find Go useful?

At launch, there was a flurry of attention. Google had produced a new programming language, and everyone was eager to check it out. Some programmers were turned off by Go's conservative feature set—at first glance they saw "nothing to see here"—but a smaller group saw the beginnings of an ecosystem tailored to their needs as working software engineers. These few would form the kernel of the Go community.

Gopher illustration by Renee French

After the initial release, it took us a while to properly communicate the goals and design ethos behind Go. Rob Pike did so eloquently in his 2012 essay Go at Google: Language Design in the Service of Software Engineering and more personally in his blog post Less is exponentially more. Andrew Gerrand's Code that grows with grace (slides) and Go for Gophers (slides) give a more in-depth, technical take on Go's design philosophy.

Over time, the few became many. The turning point for the project was the release of Go 1 in March 2012, which provided a stable language and standard library that developers could trust. By 2014, the project had hundreds of core contributors, the ecosystem had countless libraries and tools maintained by thousands of developers, and the greater community had many passionate members (or, as we call them, "gophers"). Today, by our current metrics, the Go community is growing faster than we believed possible.

Where can those gophers be found? They are at the many Go events that are popping up around the world. This year we saw several dedicated Go conferences: the inaugural GopherCon and dotGo conferences in Denver and Paris, the Go DevRoom at FOSDEM and two more instances of the biannual GoCon conference in Tokyo. At each event, gophers from around the globe eagerly presented their Go projects. For the Go team, it is very satisfying to meet so many programmers that share our vision and excitement.

More than 1,200 gophers attended GopherCon in Denver and dotGo in Paris.

There are also dozens of community-run Go User Groups spread across cities worldwide. If you haven't visited your local group, consider going along. And if there isn't a group in your area, maybe you should start one?

Today, Go has found a home in the cloud. Go arrived as the industry underwent a tectonic shift toward cloud computing, and we were thrilled to see it quickly become an important part of that movement. Its simplicity, efficiency, built-in concurrency primitives, and modern standard library make it a great fit for cloud software development (after all, that's what it was designed for). Significant open source cloud projects like Docker and Kubernetes have been written in Go, and infrastructure companies like Google, CloudFlare, Canonical, Digital Ocean, GitHub, Heroku, and Microsoft are now using Go to do some heavy lifting.

So, what does the future hold? We think that 2015 will be Go's biggest year yet.

Go 1.4—in addition to its new features and fixes—lays the groundwork for a new low-latency garbage collector and support for running Go on mobile devices. It is due to be released on December 1st 2014. We expect the new GC to be available in Go 1.5, due June 1st 2015, which will make Go appealing for a broader range of applications. We can't wait to see where people take it.

And there will be more great events, with GothamGo in New York (15 Nov), another Go DevRoom at FOSDEM in Brussels (Jan 31 and Feb 1; get involved!), GopherCon India in Bengaluru (19-21 Feb), the original GopherCon back at Denver in July, and dotGo on again at Paris in November.

The Go team would like to extend its thanks to all the gophers out there. Here's to the next five years.

To celebrate 5 years of Go, over the coming month the Gopher Academy will publish a series of articles by prominent Go users. Be sure to check out their blog for more Go action.

Rust Lang News - Oct 30

Stability as a Deliverable

The upcoming Rust 1.0 release means a lot, but most fundamentally it is a commitment to stability, alongside our long-running commitment to safety.

Starting with 1.0, we will move to a 6-week release cycle and a menu of release “channels”. The stable release channel will provide pain-free upgrades, and the nightly channel will give early adopters access to unfinished features as we work on them.

Committing to stability

Since the early days of Rust, there have only been two things you could count on: safety, and change. And sometimes not the first one. In the process of developing Rust, we’ve encountered a lot of dead ends, and so it’s been essential to have the freedom to change the language as needed.

But Rust has matured, and core aspects of the language have been steady for a long time. The design feels right. And there is a huge amount of pent up interest in Rust, waiting for 1.0 to ship so that there is a stable foundation to build on.

It’s important to be clear about what we mean by stable. We don’t mean that Rust will stop evolving. We will release new versions of Rust on a regular, frequent basis, and we hope that people will upgrade just as regularly. But for that to happen, those upgrades need to be painless.

To put it simply, our responsibility is to ensure that you never dread upgrading Rust. If your code compiles on Rust stable 1.0, it should compile with Rust stable 1.x with a minimum of hassle.

The plan

We will use a variation of the train model, first introduced in web browsers and now widely used to provide stability without stagnation:

  • New work lands directly in the master branch.

  • Each day, the last successful build from master becomes the new nightly release.

  • Every six weeks, a beta branch is created from the current state of master, and the previous beta is promoted to be the new stable release.

In short, there are three release channels – nightly, beta, and stable – with regular, frequent promotions from one channel to the next.

New features and new APIs will be flagged as unstable via feature gates and stability attributes, respectively. Unstable features and standard library APIs will only be available on the nightly branch, and only if you explicitly “opt in” to the instability.

The beta and stable releases, on the other hand, will only include features and APIs deemed stable, which represents a commitment to avoid breaking code that uses those features or APIs.

The FAQ

There are a lot of details involved in the above process, and we plan to publish RFCs laying out the fine points. The rest of this post will cover some of the most important details and potential worries about this plan.

What features will be stable for 1.0?

We’ve done an analysis of the current Rust ecosystem to determine the most used crates and the feature gates they depend on, and used this data to guide our stabilization plan. The good news is that the vast majority of what’s currently being used will be stable by 1.0:

  • There are several features that are nearly finished already: struct variants, default type parameters, tuple indexing, and slicing syntax.

  • There are two key features that need significantly more work, but are crucial for 1.0: unboxed closures and associated types.

  • Finally, there are some widely-used features with flaws that cannot be addressed in the 1.0 timeframe: glob imports, macros, and syntax extensions. This is where we have to make some tough decisions.

After extensive discussion, we plan to release globs and macros as stable at 1.0. For globs, we believe we can address problems in a backwards-compatible way. For macros, we will likely provide an alternative way to define macros (with better hygiene) at some later date, and will incrementally improve the “macro rules” feature until then. The 1.0 release will stabilize all current macro support, including import/export.

On the other hand, we cannot stabilize syntax extensions, which are plugins with complete access to compiler internals. Stabilizing them would effectively freeze the compiler's internals forever; we need to design a more deliberate interface between extensions and the compiler. So syntax extensions will remain behind a feature gate for 1.0.

Many major uses of syntax extensions could be replaced with traditional code generation, and the Cargo tool will soon be growing specific support for this use case. We plan to work with library authors to help them migrate away from syntax extensions prior to 1.0. Because many syntax extensions don’t fit this model, we also see stabilizing syntax extensions as an immediate priority after the 1.0 release.

What parts of the standard library will be stable for 1.0?

We have been steadily stabilizing the standard library, and have a plan for nearly all of the modules it provides. The expectation is that the vast majority of functionality in the standard library will be stable for 1.0. We have also been migrating more experimental APIs out of the standard library and into their own crates.

What about stability attributes outside of the standard library?

Library authors can continue to use stability attributes as they do today to mark their own stability promises. These attributes are not tied into the Rust release channels by default. That is, when you’re compiling on Rust stable, you can only use stable APIs from the standard library, but you can opt into experimental APIs from other libraries. The Rust release channels are about making upgrading Rust itself (the compiler and standard library) painless.

Library authors should follow semver; we will soon publish an RFC defining how library stability attributes and semver interact.

Why not allow opting in to instability in the stable release?

There are three problems with allowing unstable features on the stable release.

First, as the web has shown numerous times, merely advertising instability doesn’t work. Once features are in wide use it is very hard to change them – and once features are available at all, it is very hard to prevent them from being used. Mechanisms like “vendor prefixes” on the web that were meant to support experimentation instead led to de facto standardization.

Second, unstable features are by definition work in progress. But the beta/stable snapshots freeze the feature at scheduled points in time, while library authors will want to work with the latest version of the feature.

Finally, we simply cannot deliver stability for Rust unless we enforce it. Our promise is that, if you are using the stable release of Rust, you will never dread upgrading to the next release. If libraries could opt in to instability, then we could only keep this promise if all library authors guaranteed the same thing by supporting all three release channels simultaneously.

It’s not realistic or necessary for the entire ecosystem to flawlessly deal with these problems. Instead, we will enforce that stable means stable: the stable channel provides only stable features.

Won’t this split the ecosystem? Will everyone use nightly at 1.0?

It doesn’t split the ecosystem: it creates a subset. Programmers working with the nightly release channel can freely use libraries that are designed for the stable channel. There will be pressure to stabilize important features and APIs, and so the incentives to stay in the unstable world will shrink over time.

We have carefully planned the 1.0 release so that the bulk of the existing ecosystem will fit into the “stable” category, and thus newcomers to Rust will immediately be able to use most libraries on the stable 1.0 release.

What are the stability caveats?

We reserve the right to fix compiler bugs, patch safety holes, and change type inference in ways that may occasionally require new type annotations. We do not expect any of these changes to cause headaches when upgrading Rust.

The library API caveats will be laid out in a forthcoming RFC, but are similarly designed to minimize upgrade pain in practice.

Will Rust and its ecosystem continue their rapid development?

Yes! Because new work can land on master at any time, the train model doesn’t slow down the pace of development or introduce artificial delays. Rust has always evolved at a rapid pace, with lots of help from amazing community members, and we expect this will only accelerate.

over a year ago

Idris Lang News - Oct 26

Idris 0.9.15.1 released

A new version of Idris, 0.9.15.1, has been released. You can find this on hackage, or from the download page.

The tutorial has also been updated.

Thanks as always to everyone who has contributed, either with code, documentation or by testing and reporting issues. You can find the names of all contributors in the CONTRIBUTORS file in the source repository.

Idris is still research software, so we expect there to be bugs. Please don’t keep them to yourself! If you find any problem, please report it on the github issue tracker. It is very helpful if you can also provide a small test case which reproduces your problem.

Also, do let us know how you get on in general either via the mailing list or the #idris channel on irc.freenode.net.

The main new features and updates are:

  • Naming of data and type constructors is made consistent across the standard library.
    • Note: This change will break existing code!
    • Data and Type Constructors now consistently begin with a capital letter, which means in particular we have Refl, FZ and FS for refl, fZ and fS.
    • The empty type is now called Void, and its eliminator void.
  • EXPERIMENTAL support for Uniqueness Types
  • EXPERIMENTAL support for Partial Evaluation, as described in Scrapping your Inefficient Engine.
  • The Java and LLVM backends have been factored out into separate projects, idris-java and idris-llvm.

There are several minor updates and improvements:

  • More efficient representation of the proof state, leading to faster elaboration of large expressions.
  • Two new tactics: skip and fail. skip does nothing, and fail takes a string as an argument and produces it as an error, with corresponding reflected tactics.
  • Improved display in the interactive prover.
  • Unary negation now desugars to `negate`, which is a method of the `Neg` type class. This allows instances of `Num` that can’t be negative, like `Nat`, and it makes correct IEEE Float operations easier to encode. Additionally, unary negation is now available to DSL authors.
  • New REPL command `:printdef` displays the internal definition of a name.
  • New REPL command `:pprint` pretty-prints a definition or term with LaTeX or HTML highlighting.
  • Terms in code blocks in documentation strings are now parsed and type checked. If this succeeds, they are rendered in full colour in documentation lookups, and with semantic highlighting for IDEs.
  • Fenced code blocks in docs defined with the “example” attribute are rendered as code examples.
  • Fenced code blocks declared to be Idris code that fail to parse or type check now provide error messages to IDE clients.
over a year ago

Swift Lang News - Oct 20

Failable Initializers

Swift version 1.1 is new in Xcode 6.1, and it introduces a new feature: failable initializers. Initialization is the process of providing initial values to each of the stored properties of a class or struct, establishing the invariants of the object. In some cases initialization can fail. For example, initializing the object requires access to a resource, such as loading an image from a file:

[view code in blog]

If the file does not exist or is unreadable for any reason, the initialization of the NSImage will fail. With Swift version 1.1, such failures can be reported using a failable initializer. When constructing an object using a failable initializer, the result is an optional that either contains the object (when the initialization succeeded) or contains nil (when the initialization failed). Therefore, the initialization above should handle the optional result directly:

[view code in blog]

An initializer defined with init can be made failable by adding a ? or a ! after the init, which indicates the form of optional that will be produced by constructing an object with that initializer. For example, one could add a failable initializer to Int that attempts to perform a conversion from a String:

[view code in blog]

In a failable initializer, return nil indicates that initialization has failed; no other value can be returned. In the example, failure occurs when the string could not be parsed as an integer. Otherwise, self is initialized to the parsed value.
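
The blog's snippets aren't reproduced in this feed, but the shape of the pattern carries over to other languages. As a rough Go analogue (the NewIntFromString name is invented for illustration), a "failable" constructor can return a nil pointer when parsing fails, playing the role of the optional:

```go
package main

import (
	"fmt"
	"strconv"
)

// NewIntFromString mirrors a failable Int(String) initializer:
// it returns nil when the string cannot be parsed as an integer.
func NewIntFromString(s string) *int {
	n, err := strconv.Atoi(s)
	if err != nil {
		return nil // initialization failed
	}
	return &n
}

func main() {
	if v := NewIntFromString("42"); v != nil {
		fmt.Println("parsed:", *v)
	}
	if v := NewIntFromString("not a number"); v == nil {
		fmt.Println("initialization failed")
	}
}
```

Idiomatic Go would usually return `(int, error)` rather than a pointer; the pointer form is used here only to echo the optional-result shape of the Swift feature.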

Failable initializers eliminate the most common reason for factory methods in Swift, which were previously the only way to report failure when constructing an object. For example, enums that have a raw type provided a factory method fromRaw that returned an optional enum. Now, the Swift compiler synthesizes a failable initializer that takes a raw value and attempts to map it to one of the enum cases. For example:

[view code in blog]

Using the failable initializer allows greater use of Swift’s uniform construction syntax, which simplifies the language by eliminating the confusion and duplication between initializers and factory methods. Along with the introduction of failable initializers, Swift now treats more Cocoa factory methods — those with NSError arguments — as initializers, providing a more uniform experience for object construction.

You can read more about failable initializers in The Swift Programming Language.

over a year ago

Swift Lang News - Oct 07

Building Your First Swift App Video

UPDATE: To make it easier to follow along, we’ve included the code you see pasted in the video.

So far the Swift blog has focused on advanced programming topics, including the design principles of the Swift language. We thought it would be helpful to provide content for programmers who are new to Swift and just trying Xcode for the first time. To make it more approachable for everyone, we put together a very short video that demonstrates how to build an iOS app in Swift from scratch, in less than ten minutes.

Watch the video

over a year ago

Lambda the Ultimate - Oct 05

CFP: Off-the-Beaten-Track (OBT) workshop at POPL 2015

Announcing the 2015 edition of the OBT workshop, to be co-located with POPL 2015, in Mumbai, India. Two-page paper submissions are due November 7, 2014.

From the web page (http://www.cs.rice.edu/~sc40/obt15/):

Programming language researchers have the principles, tools, algorithms and abstractions to solve all kinds of problems, in all areas of computer science. However, identifying and evaluating new problems, particularly those that lie outside the typical core PL problems we all know and love, can be a significant challenge. This workshop's goal is to identify and discuss problems that do not often show up in our top conferences, but where programming language research can make a substantial impact. We hope fora like this will increase the diversity of problems that are studied by PL researchers and thus increase our community's impact on the world.

While many workshops associated with POPL have become more like mini-conferences themselves, this is an anti-goal for OBT. The workshop will be informal and structured to encourage discussion. We are at least as interested in problems as in solutions.

over a year ago

Swift Lang News - Dec 12

What Happened to NSMethodSignature ?

UPDATE: We’ve added the Request.playground file to this post so you can download it and easily experiment with the code yourself.

Bringing the Cocoa frameworks to Swift gave us a unique opportunity to look at our APIs with a fresh perspective. We found classes that we didn't feel fit with the goals of Swift, most often due to the priority we give to safety. For instance, some classes related to dynamic method invocation are not exposed in Swift, namely NSInvocation and NSMethodSignature.

We recently received a bug report from a developer who noticed this absence. This developer was using NSMethodSignature in Objective-C to introspect the types of method arguments, and in the process of migrating this code to Swift, noticed that NSMethodSignature is not available. The code being migrated could accept HTTP handlers of varying signatures, such as:

[view code in blog]

In Objective-C, NSMethodSignature can be used to determine that the API of the first method would require a [String: String] argument, and the second method would require a JSON value. However, Swift is a powerful language and can easily handle this scenario without using NSMethodSignature, and in a way that doesn't undermine the help that the compiler provides for type and memory safety.

Here is an alternative way to solve the same problem in Swift:

[view code in blog]

First, we'll use a protocol to define that whatever is going to handle our HTTPRequest does so via this interface. This protocol is very simple, with only a single method.

Why use a protocol here, instead of subclassing an HTTPHandler class? Because protocols give the flexibility of leaving the implementation details up to the clients of this code. If we were to make an HTTPHandler class, we would require clients to also use classes, forcing upon them the semantics of reference types. However, by using a protocol, clients can decide for themselves the appropriate type to use in their code, whether it be class, struct, or even enum.

[view code in blog]

Next, our HTTPServer class has a generic method that accepts an HTTPHandlerType as a parameter. By using the handler's associated type, it can perform the conditional downcast of the args parameter to determine if this handler should be given an opportunity to handle the request. Here we can see the benefit of defining HTTPHandlerType as a protocol. The HTTPServer doesn't need to know how the handler is reacting to the request, nor does it even need to care about the nature of the handler itself. All it needs to know is that the value can handle requests.

[view code in blog]

When our HTTPServer receives a request, it will iterate through its handlers and see if any can deal with the request.

Now we can easily create a custom HTTPHandlerType with varying argument types and register it with the HTTPServer:

[view code in blog]

With a combination of protocols and generics, we have written Swift code to elegantly create and register HTTP handlers of varying types. This approach also lets the compiler guarantee type safety, while ensuring excellent runtime performance.
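
For comparison, the same design has a natural sketch in Go terms (all type and method names below are invented for illustration): an interface plays the role of the HTTPHandlerType protocol, and a type assertion on the argument payload replaces the conditional downcast:

```go
package main

import "fmt"

// HTTPHandler is the analogue of the HTTPHandlerType protocol: any
// value that can try to handle a request's arguments satisfies it.
type HTTPHandler interface {
	// Handle reports whether this handler accepted the arguments.
	Handle(args interface{}) bool
}

// StringMapHandler only accepts map[string]string arguments.
type StringMapHandler struct{}

func (StringMapHandler) Handle(args interface{}) bool {
	m, ok := args.(map[string]string) // the "conditional downcast"
	if !ok {
		return false
	}
	fmt.Println("handled string map with", len(m), "entries")
	return true
}

// Server iterates over its handlers until one accepts the request;
// it never needs to know the concrete handler or argument types.
type Server struct{ handlers []HTTPHandler }

func (s *Server) Dispatch(args interface{}) bool {
	for _, h := range s.handlers {
		if h.Handle(args) {
			return true
		}
	}
	return false
}

func main() {
	s := &Server{handlers: []HTTPHandler{StringMapHandler{}}}
	fmt.Println(s.Dispatch(map[string]string{"q": "swift"}))
	fmt.Println(s.Dispatch(42)) // no handler accepts an int
}
```

Go's interface check happens at runtime rather than via generics, so this sketch trades some of the compile-time guarantees the Swift version enjoys for the same dispatch structure.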

over a year ago

Rust Lang News - Dec 12

Rust 1.0: Scheduling the trains

As 2014 is drawing to a close, it’s time to begin the Rust 1.0 release cycle!

TL;DR: we will transition to a six week release cycle on Jan 9, 2015, and produce Rust 1.0.0 final at least two cycles afterwards:

  • Rust 1.0.0-alpha – Friday, Jan 9, 2015
  • Rust 1.0.0-beta1 – Week of Feb 16, 2015
  • Rust 1.0.0 – One or more six-week cycles later

We talked before about why Rust is reaching 1.0, and also about the 6-week train model (with Nightly, Beta, and Stable channels) that will enable us to deliver stability without stagnation. This post finishes the story by laying out the transition to this new release model and the stability guarantees it provides.

The alpha release

Reaching alpha means three things:

  • The language is feature-complete. All gates are removed from features we expect to ship with 1.0.

  • The standard library is nearly feature-complete. The majority of APIs that will ship in 1.0 stable will already be marked as #[stable].

  • Warnings for #[unstable] features are turned on by default. (Note that the #[experimental] stability level is going away.)

In other words, 1.0.0-alpha gives a pretty accurate picture of what 1.0 will look like, but doesn’t yet institute release channels. By turning on warnings for unstable APIs but not excluding them altogether, we can get community feedback about which important APIs still need to be stabilized without those APIs simply disappearing overnight.

While we expect the pace of breakage to slow dramatically when we reach feature-complete status, 1.0.0-alpha is still a pre-release:

“A pre-release version indicates that the version is unstable and might not satisfy the intended compatibility requirements as denoted by its associated normal version.”

That is, we will reserve the right to make minor breaking changes to both the language and libraries – including #[stable] APIs – throughout the duration of the alpha cycle. But we expect any such changes to be relatively minor tweaks, and changes to #[stable] APIs to be very rare.

The beta release(s)

Six weeks later, we will begin the beta period:

  • Both the language and libraries are feature-complete. All APIs shipping for Rust 1.0 are marked #[stable].

  • Release channels take effect: feature gates and #[unstable] APIs are available on nightly builds, but not on the beta. This change is part of our commitment to stability.

Unlike the alpha cycle, where we still expect some minor breakage, the beta cycle should not involve breakage unless a very significant problem is found. Ideally, the beta cycle will be focused on testing, bugfixing, and polish.

We plan to run at least one beta cycle before the final release.

The final release

Finally, after one or more beta cycles, we will have produced a release candidate that is ready for the world:

  • We are ready to promise stability – hassle-free upgrades – for the duration of the 1.X series.

  • The core documentation (The Guide/Guides) is fully in sync with the language and libraries.

We are incredibly excited for Rust to reach this point.

What this means for the ecosystem

With the launch of Cargo and crates.io, Rust’s ecosystem has already seen significant expansion, but it still takes a lot of work to track Rust’s nightly releases. Beginning with the alpha release, and especially approaching beta1, this will change dramatically; code that works with beta1 should work with 1.0 final without any changes whatsoever.

This migration into stability should be a boon for library writers, and we hope that by 1.0 final there will be a massive collection of crates ready for use on the stable channel – and ready for the droves of people trying out Rust for the first time.

Let’s do this!

over a year ago

Go Lang News - Dec 10

Go 1.4 is released

Today we announce Go 1.4, the fifth major stable release of Go, arriving six months after our previous major release Go 1.3. It contains a small language change, support for more operating systems and processor architectures, and improvements to the tool chain and libraries. As always, Go 1.4 keeps the promise of compatibility, and almost everything will continue to compile and run without change when moved to 1.4. For the full details, see the Go 1.4 release notes.

The most notable new feature in this release is official support for Android. Using the support in the core and the libraries in the golang.org/x/mobile repository, it is now possible to write simple Android apps using only Go code. At this stage, the support libraries are still nascent and under heavy development. Early adopters should expect a bumpy ride, but we welcome the community to get involved.

The language change is a tweak to the syntax of for-range loops. You may now write "for range s {" to loop over each item from s, without having to assign the value, loop index, or map key. See the release notes for details.
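
A minimal example of the new form, iterating a fixed number of times without binding the index or value:

```go
package main

import "fmt"

func main() {
	s := []string{"a", "b", "c"}
	n := 0
	// New in Go 1.4: range without assigning the value,
	// loop index, or map key.
	for range s {
		n++
	}
	fmt.Println(n) // prints 3, the number of elements in s
}
```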

The go command has a new subcommand, go generate, to automate the running of tools to generate source code before compilation. For example, it can be used to automate the generation of String methods for typed constants using the new stringer tool. For more information, see the design document.
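
As a sketch of that workflow (the String method below is hand-written to approximate what stringer emits, and actually running the directive requires the stringer tool to be installed):

```go
package main

import "fmt"

type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)

// Running "go generate" executes this directive, invoking stringer to
// write a String method for Pill into a generated source file.
//go:generate stringer -type=Pill

// String is roughly what stringer would generate for the constants above.
func (p Pill) String() string {
	switch p {
	case Placebo:
		return "Placebo"
	case Aspirin:
		return "Aspirin"
	case Ibuprofen:
		return "Ibuprofen"
	}
	return fmt.Sprintf("Pill(%d)", int(p))
}

func main() {
	fmt.Println(Aspirin) // prints the constant's name, not its number
}
```

The directive is an ordinary comment, so the file compiles whether or not go generate has been run.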

Most programs will run about the same speed or slightly faster in 1.4 than in 1.3; some will be slightly slower. There are many changes, making it hard to be precise about what to expect. See the release notes for more discussion.

And, of course, there are many more improvements and bug fixes.

In case you missed it, a few weeks ago the sub-repositories were moved to new locations. For example, the go.tools packages are now imported from "golang.org/x/tools". See the announcement post for details.

This release also coincides with the project's move from Mercurial to Git (for source control), Rietveld to Gerrit (for code review), and Google Code to GitHub (for issue tracking and wiki). The move affects the core Go repository and its sub-repositories. You can find the canonical Git repositories at go.googlesource.com, and the issue tracker and wiki at the golang/go GitHub repo.

While development has already moved over to the new infrastructure, for the 1.4 release we still recommend that users who install from source use the Mercurial repositories.

For App Engine users, Go 1.4 is now available for beta testing. See the announcement for details.

From all of us on the Go team, please enjoy Go 1.4, and have a happy holiday season.

over a year ago

Lambda the Ultimate - Nov 22

Zélus : A Synchronous Language with ODEs

Zélus : A Synchronous Language with ODEs
Timothy Bourke, Marc Pouzet
2013

Zélus is a new programming language for modeling systems that mix discrete logical time and continuous time behaviors. From a user's perspective, its main originality is to extend an existing Lustre-like synchronous language with Ordinary Differential Equations (ODEs). The extension is conservative: any synchronous program expressed as data-flow equations and hierarchical automata can be composed arbitrarily with ODEs in the same source code.

A dedicated type system and causality analysis ensure that all discrete changes are aligned with zero-crossing events so that no side effects or discontinuities occur during integration. Programs are statically scheduled and translated into sequential code that, by construction, runs in bounded time and space. Compilation is effected by source-to-source translation into a small synchronous subset which is processed by a standard synchronous compiler architecture. The resultant code is paired with an off-the-shelf numeric solver.

We show that it is possible to build a modeler for explicit hybrid systems à la Simulink/Stateflow on top of an existing synchronous language, using it both as a semantic basis and as a target for code generation.

Synchronous programming languages (à la Lucid Synchrone) are language designs for reactive systems with discrete time. Zélus extends them gracefully to hybrid discrete/continuous systems, to interact with the physical world, or simulate it -- while preserving their strong semantic qualities.

The paper is short (6 pages) and centered around examples rather than the theory -- I enjoyed it. Not being familiar with the domain, I was unsure what the "zero-crossings" mentioned in the introduction are, but there is a good explanation further down in the paper:

The standard way to detect events in a numeric solver is via zero-crossings where a solver monitors expressions for changes in sign and then, if they are detected, searches for a more precise instant of crossing.

The Zélus website has a 'publications' page with more advanced material, and an 'examples' page with case studies.

over a year ago

Lambda the Ultimate - Nov 18

Facebook releases "Flow", a statically typed JavaScript variant

The goal of Flow is to find errors in JavaScript code with little programmer effort. Flow relies heavily on type inference to find type errors even when the program has not been annotated - it precisely tracks the types of variables as they flow through the program.

At the same time, Flow is a gradual type system. Any parts of your program that are dynamic in nature can easily bypass the type checker, so you can mix statically typed code with dynamic code.

Flow also supports a highly expressive type language. Flow types can express much more fine-grained distinctions than traditional type systems. For example, Flow helps you catch errors involving null, unlike most type systems.

Read more here.
Here's the announcement from Facebook.

over a year ago

Swift Lang News - Nov 11

Introduction to the Swift REPL

Xcode 6.1 introduces yet another way to experiment with Swift in the form of an interactive Read Eval Print Loop, or REPL. Developers familiar with interpreted languages will feel comfortable in this command-line environment, and even experienced developers will find a few unique features. To get started, launch Terminal.app (found in /Applications/Utilities) and type “swift” at the prompt in OS X Yosemite, or “xcrun swift” in OS X Mavericks. You’ll then be in the Swift REPL:

[view code in blog]

All you need to do is type Swift statements and the REPL will immediately execute your code. Expression results are automatically formatted and displayed along with their type, as are the results of both variable and constant declarations. Console output flows naturally within the interactive session:

[view code in blog]

Note that the result from line one has been given a name by the REPL even though the result of the expression wasn’t explicitly assigned to anything. You can reference these results to reuse their values in subsequent statements:

[view code in blog]

The Swift compiler recognizes incomplete code, and will prompt for additional input when needed. Your code will even be indented automatically as it would in Xcode. For instance, starting a function:

[view code in blog]

The prompt for continuation lines is a line number followed by a period instead of the angle bracket that indicates a new statement, so you can tell at a glance when you’re being asked to complete a code fragment. At this point you can keep typing the remaining lines of the function:

[view code in blog]

There are three noteworthy points to make here: The first is that line six was originally indented, but the REPL automatically unindented when we typed the closing brace. The second is that the function references a parameter we forgot to declare and needs a return type, so you’ll need to add both to the declaration. The last is that even if you did press return after the last line, it’s not too late to fix it.

Multi-Line History

When code is submitted to the compiler it’s also recorded in the REPL history, which makes correcting mistakes trivial. If you pressed return at the end of the incomplete function declaration above, you’d be presented with the following message:

[view code in blog]

Like most history implementations, you can call up your last entry by pressing up arrow from the prompt. The REPL brings back all three lines in our example, and places the cursor at the end. You can now proceed with editing the code to correct your mistake as described in the next section.

Your history is preserved between sessions and will record hundreds of code fragments. Each time you move up from the top line you’ll move to an earlier history entry. Each time you move down from an empty line at the bottom of an entry you’ll move to a more recent history entry. The empty line that opens up before moving to the next entry comes in handy for reasons discussed below.

Multi-Line Editing

Even though the REPL behaves like a traditional line editor, it also provides convenient features for dealing with multi-line input like most class or function declarations. In the example above, before pressing return on the final line you can press up arrow to move the cursor up to the declaration line, then use the left arrow to move the cursor just after the opening parenthesis for the parameter list:

[view code in blog]

Type the parameter declaration, press the right arrow to move past the closing parenthesis and add the return type as well:

[view code in blog]

You can’t press return to complete the declaration at this point because you’re in the middle of a block of text. Pressing return here would insert a line break, which can be useful if you’re trying to insert additional lines in a function or method body, but what you want here is to move to the end of the declaration. You can press down arrow twice to get there, or use the Emacs sequence ESC > (the escape key followed by a closing angle bracket). Pressing return at the end of the last line will compile the newly declared function so it’s ready for use:

[view code in blog]

Automatic detection of statement completion means that you can just type code and the REPL will do the right thing the vast majority of the time. There are occasions, however, where it’s necessary to submit more than one declaration at the same time because they have mutual dependencies. Consider the following code:

[view code in blog]

Typing everything above line by line will result in trying to compile the first function once the third line is complete, and of course this produces an error:

[view code in blog]

You could declare both functions on a single line to get around the automatic completion detection that takes place when you press return, but there’s a better solution. After typing the third line above, you can press the down arrow to create a fourth line manually, and type the remainder normally. The two declarations are compiled together, achieving the desired goal of mutual recursion.
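
Mutual dependencies of this sort are routine in languages compiled a file at a time. For the curious, the classic toy example of two functions that cannot be compiled in isolation, sketched in Go:

```go
package main

import "fmt"

// isEven and isOdd call each other, so neither definition makes sense
// without the other -- the same mutual dependency that line-at-a-time
// compilation in a REPL has to work around.
func isEven(n uint) bool {
	if n == 0 {
		return true
	}
	return isOdd(n - 1)
}

func isOdd(n uint) bool {
	if n == 0 {
		return false
	}
	return isEven(n - 1)
}

func main() {
	fmt.Println(isEven(10), isOdd(10)) // prints: true false
}
```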

Quick Reference

To help you get started, here’s a handy chart with some of the most commonly used editing and navigation keys:

[view code in blog]

over a year ago

Lambda the Ultimate - Nov 05

Why do we need modules at all?

Post by Joe Armstrong of Erlang fame. Leader:

Why do we need modules at all? This is a brain-dump-stream-of-consciousness-thing. I've been thinking about this for a while. I'm proposing a slightly different way of programming here. The basic idea is:

  • do away with modules
  • all functions have unique distinct names
  • all functions have (lots of) meta data
  • all functions go into a global (searchable) Key-value database
  • we need letrec
  • contribution to open source can be as simple as contributing a single function
  • there are no "open source projects" - only "the open source Key-Value database of all functions"
  • Content is peer reviewed

Why does Erlang have modules? There's a good and a bad side to modules. Good: provides a unit of compilation, a unit of code distribution, a unit of code replacement. Bad: it's very difficult to decide which module to put an individual function in. Breaks encapsulation (see later).
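The proposal is easy to prototype in miniature. The following toy sketch (Python; every name and metadata field here is invented for illustration, not taken from the post) keeps uniquely named, standalone functions in one global, searchable key-value store:

```python
# Toy sketch of Armstrong's idea: no modules, just one global, searchable
# key-value database of uniquely named functions with attached metadata.

functions = {}  # the "global Key-Value database of all functions"

def register(name, func, **meta):
    """Add a function under a globally unique name, with metadata."""
    if name in functions:
        raise ValueError(f"function name must be unique: {name}")
    functions[name] = {"code": func, "meta": meta}

def search(**query):
    """Return names of functions whose metadata matches every query item."""
    return [name for name, entry in functions.items()
            if all(entry["meta"].get(k) == v for k, v in query.items())]

register("lists.reverse", lambda xs: xs[::-1],
         author="alice", reviewed=True)
register("math.double", lambda n: 2 * n,
         author="bob", reviewed=False)

print(search(reviewed=True))                  # ['lists.reverse']
print(functions["math.double"]["code"](21))   # 42
```

Contributing to "open source" then amounts to one `register` call per function, and peer review is just a metadata flag, which is roughly the workflow the post imagines.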

over a year ago

Lambda the Ultimate - Oct 28

Conservation laws for free!

In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:


Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy, which by Noether’s theorem is a consequence of a system’s invariance under time-shifting.

In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
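A "free" theorem is easy to check concretely. Any parametric function of type ∀a. [a] → [a] can only rearrange, drop, or duplicate list elements, so it must commute with mapping over the list. Here is one instance, sketched in Python (the paper itself works in an extension of System Fω, not in any mainstream language):

```python
# Free-theorem instance for f : forall a. [a] -> [a].
# Such an f cannot inspect its elements, so map g . f == f . map g
# for every g — "for free", just from f's type.

def f(xs):
    # a representative polymorphic list function: never looks at elements
    return xs[::-1]

def g(x):
    # an arbitrary change of data representation
    return x * 10

xs = [1, 2, 3]
lhs = [g(x) for x in f(xs)]   # map g after f
rhs = f([g(x) for x in xs])   # f after map g
print(lhs == rhs)             # True
```

Atkey's contribution is to show that the same invariance argument, applied to polymorphically typed Lagrangians, satisfies the hypotheses of Noether's theorem and so yields conserved quantities in just this "for free" style.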

over a year ago

Lambda the Ultimate - Oct 22

Seemingly impossible programs

In case this one went under the radar, at POPL'12, Martín Escardó gave a tutorial on seemingly impossible functional programs:


Programming language semantics is typically applied to prove compiler correctness and allow (manual or automatic) program verification. Certain kinds of semantics can also be applied to discover programs that one wouldn't have otherwise thought of. This is the case, in particular, for semantics that incorporate topological ingredients (limits, continuity, openness, compactness). For example, it turns out that some function types (X -> Y) with X infinite (but compact) do have decidable equality, contradicting perhaps popular belief, but certainly not (higher-type) computability theory. More generally, one can often check infinitely many cases in finite time.

I will show you such programs, run them fast in surprising instances, and introduce the theory behind their derivation and working. In particular, I will study a single (very high type) program that (i) optimally plays sequential games of unbounded length, (ii) implements the Tychonoff Theorem from topology (and builds finite-time search functions for infinite sets), (iii) realizes the double-negation shift from proof theory (and allows us to extract programs from classical proofs that use the axiom of countable choice). There will be several examples in the languages Haskell and Agda.

A shorter version (coded in Haskell) appears in Andrej Bauer's blog.
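The talk's examples are written in Haskell and Agda, but the core searcher can be transcribed roughly into Python, since an infinite binary sequence can be represented as a function from indices to bits. This is only a sketch (the names and structure are mine, not Escardó's), and it terminates precisely because any total predicate inspects only finitely many bits:

```python
# Rough Python transcription of Escardó's exhaustive search over the
# infinite (but compact) space of binary sequences. A sequence is a
# function from natural numbers to bits {0, 1}; predicates that inspect
# only finitely many bits are decided in finite time.

def cons(b, a):
    """Prepend bit b to sequence a, lazily."""
    return lambda n: b if n == 0 else a(n - 1)

def find(p):
    """Return a sequence satisfying p, if any sequence does."""
    cache = {}
    def seq(n):
        if "head" not in cache:
            if forsome(lambda a: p(cons(0, a))):
                cache["head"], cache["tail"] = 0, find(lambda a: p(cons(0, a)))
            else:
                cache["head"], cache["tail"] = 1, find(lambda a: p(cons(1, a)))
        return cache["head"] if n == 0 else cache["tail"](n - 1)
    return seq  # returned lazily: nothing is searched until a bit is demanded

def forsome(p):
    return p(find(p))

def forevery(p):
    return not forsome(lambda a: not p(a))

# Deciding properties over uncountably many sequences, in finite time:
print(forsome(lambda a: a(0) == 1 and a(1) == 0))   # True
print(forevery(lambda a: a(0) == a(1)))             # False
```

Laziness is the whole trick: `find` returns a closure immediately, and bits are computed only when the predicate demands them, so the recursion bottoms out once the predicate stops looking.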

over a year ago

Lambda the Ultimate - Oct 12

EATCS Award 2014: Gordon Plotkin

Gordon Plotkin is renowned for his groundbreaking contributions to programming language semantics, which have helped to shape the landscape of theoretical computer science, and which have impacted upon the design of programming languages and their verification technologies. The influence of his pioneering work on logical frameworks pervades modern proof technologies. In addition, he has made outstanding contributions in machine learning, automated theorem proving, and computer-assisted reasoning. He is still active in research at the topmost level, with his current activities placing him at the forefront of fields as diverse as programming semantics, applied logic, and systems biology.

Well deserved, of course. Congrats!

over a year ago

Go Lang News - Oct 06

Go at Google I/O and Gopher SummerFest

Introduction

The week of June 23rd was a good week for gophers in San Francisco. Go was a big part of Google I/O on Wednesday and Thursday, and on Monday we took advantage of the large gopher population to run the Go SummerFest, a special instance of the GoSF meetup. This blog post is a recap of both events.

Gopher SummerFest

On the Monday, more than 200 gophers gathered at the Google office in San Francisco to hear a series of talks:

  • The State of Go (slides and video), by Andrew Gerrand.
  • I was wrong, again! (slides and video), by Derek Collison.
  • Go at Splice (slides), by Matt Aimonetti.
  • Quick testing with quick (slides), by Evan Shaw.
  • Something about Go (no slides), by Blake Mizerany.

More comments and pictures from the event are available on the meetup event page.

Go at Google I/O

On the Wednesday and Thursday, Go was at Google I/O in two different formats: the Go booth in the sandbox area and the Go code labs available in the code lab area and all around the world through I/O Extended.

The Go booth

The Go booth was part of the Developer Sandbox area.

For the two days of the conference, some gophers from Google and other companies gave a series of talks and demonstrations. The talks were not recorded, but the slides and some screencasts and blog posts will be shared soon.

  • Organizing Go Code, by David Crawshaw. (slides)
  • Testing Techniques, by Andrew Gerrand. (video and slides)
  • Go for Java Developers, by Francesc Campoy. (slides)
  • Camlistore: Android, ARM, App Engine, Everywhere, by Brad Fitzpatrick. (slides)
  • Go Compilation Complexities, by Ian Lance Taylor. (slides)
  • SourceGraph: a Code Search Engine in Go, by Quinn Slack. (video and slides)

We also organized Q&A sessions and lightning talks by members of the Go community.

The Go code lab

This year, attendees of Google I/O had a code lab area with self-service computers where they could sit and learn Go. The code labs were also available to anyone through the Google I/O Extended brand. You can try it yourself at io2014codelabs.appspot.com.

Conclusion

Thanks to the organizers, speakers, and attendees who helped make these events a great success. See you next year. (Or at dotGo this week!)

over a year ago

Lambda the Ultimate - Oct 04

Domain settings

I am about to make some changes to the name server definitions. Since changes take time to propagate, you may have trouble reaching the site for a while. If this happens, try using the .com domain instead of the preferred .org domain.

over a year ago

pluto/1.3.2 - Ruby/2.0.0 (2014-11-13/x86_64-linux) on Sinatra/1.4.5 (production)