Ideas for a better programming language

When I look at the programming languages available today, it seems that all of them try to optimize execution speed or developer time at the expense of the other. For example, although compiled C code is extremely fast, it can take many more programmer hours to write a robust application in C than in a higher-level language. On the other side of the coin, a moderately complex problem can be solved by a Ruby programmer in a few minutes, but the resulting code is executed slowly, running on an interpreter (soon to be a VM, but still slower than native code) and requiring garbage collection and lots of runtime checking. Are these sacrifices necessary? I don’t think so. How is it possible to make a language that simultaneously optimizes execution speed and developer time? I believe the answer lies in static code analysis, particularly at the compiler level.

(To get a brief overview of my language ideas, see my five-point summary of a better language below.)

You may be thinking: “It takes too long to compile code; I’d much rather run an interpreter because I can develop faster.” I agree: when you are developing code, it is faster to use an interpreter. Furthermore, you can benefit from runtime safety checks that would be absent in native code. But when you are ready to distribute your code, it is much better to compile it. Unfortunately, current language implementations tend to focus on either a native compiler or an interpreter/bytecode compiler, not both. For example, there are a plethora of C compilers, but few widely-used C interpreters. The converse holds for traditionally interpreted languages: Ruby, Python, and Java programs are seldom distributed as native code binaries because no widely-used native compilers exist. A language that has both a well-supported interpreter and a compiler (ideally with reference implementations of both) lets developers use it for quick development and still get fast execution from shipped binaries.

Then why don’t we just make a really good interpreter for C or a really good native code compiler for Ruby, Python, Java, etc.? The answer to the first part is that C programs take a long time to develop primarily because of the language’s design, not compile time. The answer to the second part is that a true native code compiler is often impossible (e.g. in a language with an eval that accepts arbitrary strings) or, where it is possible, the native code is necessarily slower than for a functionally-equivalent C program because of language-required runtime checks and garbage collection.

To me, a better programming language would optimize both execution speed and developer time and would be easily compilable to native code and interpretable by design. As I mentioned earlier, I believe that these goals can be largely achieved by compiler-level static code analysis. My focus is on compiler implementation of a language rather than interpreter implementation because there are already many interpreted languages that optimize developer time well.

I probably won’t have time to implement any of my ideas in this area for a while, so for now I’ll just enumerate some of them in this and upcoming blog posts. Through this, I hope to start a conversation about how we can overcome some of the traditional issues with truly compilable languages. Solutions to such issues (which I hope to write about in future posts) include:

  • time- and memory-efficient automatic dynamic memory allocation, particularly with growing and shrinking objects like strings, arrays, and hashtables
  • optimized reference counting using properties known at compile-time to reduce the number of updates required to reference counters (see the sketch after this list)
  • compile-time checks and optional runtime checks for bad behavior such as accessing an out-of-bounds array index or dereferencing a null pointer
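
To make the reference-counting idea concrete, here is a minimal C sketch; the object_t type and the retain/release/do_work helpers are hypothetical, purely for illustration:

    #include <stdlib.h>

    /* Hypothetical refcounted object; retain/release are illustrative. */
    typedef struct { int refs; /* ... payload ... */ } object_t;

    static void retain(object_t *o)  { o->refs++; }
    static void release(object_t *o) { if (--o->refs == 0) free(o); }
    static void do_work(object_t *o) { (void)o; /* stand-in for real work */ }

    /* Naive code: every borrowed use bumps the counter... */
    void use_naive(object_t *o) {
        retain(o);
        do_work(o);
        release(o);
    }

    /* ...but if compile-time analysis proves the caller's reference
     * outlives this call, the retain/release pair is redundant and
     * can be elided entirely. */
    void use_optimized(object_t *o) {
        do_work(o);
    }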

The above list is not by any means exhaustive. Feel free to comment on other useful properties of developer-friendly languages that are difficult to put into a compilable language, hard to implement in a time- and speed-efficient manner, or just seldom seen in a compilable language for whatever reason.

I am also interested in designing the language to allow for easier compiler optimizations to native code. For example, a foreach (PHP/Perl) or each (Ruby) construct lets the compiler parallelize operations or split them along cache lines. Such optimizations cannot be done safely in C because you cannot express “do this operation on each of these elements in any order”. Instead, you must use a loop construct, which means “do these operations in this order”, restricting the meaning of your operation further than necessary and thus reducing the ability to optimize.
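
C itself cannot express this, but the OpenMP extension comes close and shows what such a construct buys; a minimal sketch (scale_all is an illustrative name; compile with -fopenmp):

    #include <stddef.h>

    /* "Do this operation on each of these elements in any order":
     * the pragma tells the compiler the iterations are independent,
     * so it may split them across cores or along cache lines. */
    void scale_all(double *data, size_t n, double factor) {
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++)
            data[i] *= factor;
    }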

To summarize, here are some of the main properties I would like to see in a better programming language:

  • Compilable into native code that is as fast or faster than functionally-equivalent C code
  • Optional runtime safety checks, available in the interpreter and insertable by the compiler, that can be enabled during development and removed (via a flag) for speed (see the sketch after this list)
  • Easy-to-use constructs (such as strings, regular expressions, and hashtables) integrated into the language (like in Ruby and others)
  • Very strict typing to increase the number of safety guarantees the compiler can provide
  • Interpreter considerations, including the ability to automatically link with compiled libraries
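
As a rough precedent for the optional-checks bullet above, C’s assert already works this way: checks run during development and vanish in release builds. A minimal sketch (checked_get is an illustrative name, not an existing API):

    #include <assert.h>
    #include <stddef.h>

    /* The bounds test runs during development but vanishes when
     * compiled with -DNDEBUG: develop with checks, ship without. */
    static inline int checked_get(const int *arr, size_t len, size_t i) {
        assert(i < len && "array index out of bounds");
        return arr[i];
    }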

To be fair, most of these will require longer compile times. However, if an interpreter is available, this will not be an issue: the code will only need to be compiled occasionally (and perhaps only by an automated build system, so the developer never has to wait).

I realize that I have not provided any empirical evidence that current interpreted/bytecode-compiled language implementations are significantly slower than native code implementations. If I had the time and resources, I would research this myself. I encourage readers of this post to provide pointers to existing research or to perform their own original research to show where major speed improvements can be made over existing interpreted/bytecode-compiled implementations.

I hope that my suggestions and brief explanations have shown you that optimizing a language for execution speed and ease-of-use does not have to be zero-sum. With some clever designing, I believe that we can have native code speed benefits with the developer-friendliness of today’s interpreted languages.

14 Responses to “Ideas for a better programming language”


  • Maybe you could do a Master’s degree in programming languages. 🙂

    I think you want faster compile time, but more error checking? I’m not sure how to accomplish both those goals – it seems like tradeoffs are necessary.

    However, consider functional programming languages. Some of these (e.g. ML) are interpreted, in the sense that the interpreter reads a definition, checks it, then you can use it. I’ve never had issues with slow compile time from using ML, and furthermore it has very strong static checking that makes run-time errors much less frequent than in a web-based interpreted language like PHP. Now, as you may know, I don’t advocate a purely functional style, but rather a mixture of functional, object-oriented, and imperative coding styles. But perhaps this overall strategy of having a set of definitions, which are interpreted and checked each time a definition is finished being written, is a step towards the answer you’re looking for.

    By the way, I think it’s unfair to use C as an example of the problems with compiled languages. C sucks because it’s old, not because it’s compiled. I think Stroustrup’s argument regarding C++ is also worth considering, namely that if a language is closer to what the machine actually does, it’s easier to write faster code, and faster code is and always will be a consideration for some applications (e.g. tiny embedded devices still have major performance concerns today).

    Your desire for both high-level ease of development and low-level efficiency reminded me of array slicing. A construct like:

    array[2..6] := 5;

    is not only simpler to understand and write; it also opens the door to multi-processor optimizations – the copy can be done in parallel on a parallel machine. (I think this might be easier to do with an interpreted language, because it might be easier to do the optimization given the current running state of the program, but I might just be making that up. The optimization could be done either way.) Much parallelization is also possible in functional programming, because it’s a lot more common to be able to prove that two separate statements can run at the same time when there are no side effects. Apparently this has become a pressing research area lately, because processor clock speeds have hit a limit of about 4 GHz – parallelization is pretty much the only way to further speed up execution.
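
    For instance, a C rendering of what a compiler might emit for that slice makes the independence visible; each store touches a distinct element, so any execution order is valid:

    #include <stddef.h>

    void assign_slice(int *array) {
        /* array[2..6] := 5, written out: five independent stores,
         * so a compiler may reorder, vectorize, or parallelize them. */
        for (size_t i = 2; i <= 6; i++)
            array[i] = 5;
    }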

    I agree with an interpreter / compiler combination, by the way.

  • Adam Richard wrote:

    I think you want faster compile time, but more error checking? I’m not sure how to accomplish both those goals – it seems like tradeoffs are necessary.

    Perhaps I was a bit unclear. I am not expecting faster compile times from my suggestions; compilation will probably be slower than in existing implementations due to the extra checks. But I am suggesting that developers should not have to wait for a compiler to test their changes. Rather, developers should use an interpreter to test their changes so that they can develop faster. Then, when the code is ready for wider testing, it should be compiled, which will be a relatively slow process compared to invoking the interpreter. This compile step could be done periodically by an automated build tool.

    In this way, developers can reap the benefits of speedy development without requiring that shipped code be interpreted. Am I making sense?

    I will reply to the rest of your post later.

  • I don’t think faster compile time or more error checking is a goal: I think that faster *development*, easier-to-understand/read/write code, and faster *execution* time are/should be the goals.

  • C sucks because it’s old

    I wasn’t aware that C sucked. It sucks in some ways (ease of use), but that’s by design. It sucks from an ease-of-development standpoint because it’s low level. A new low level language would suffer as much, I think.

  • ossguy: Yeah, that’s clear now – you don’t want faster compile time, just interpretability. I’m still a bit confused as to how to make a language interpretable, though – I can see how a compiler / runtime environment would have to be designed for it, but I’m not sure what kind of high-level language constructs would lend themselves to it. Can you give examples?

    Stephen: All programming languages suck, but some suck less. I think it’s a pretty bleak and resigned attitude that says that we can’t make a better low-level language than C. Consider C++, even. It adds several useful constructs (classes, generics, exceptions, etc.) but those constructs don’t add extra overhead except where you actually use them. Hence, the language retains its low-level ability, but with increased power.

  • @Adam Hmm, perhaps we can make a better low-level language, but I think that in general high-level languages are better. If high-level language compilers could be made to produce output as fast as low-level code, I think that would be awesome. I also disagree that C++ > C, but that’s a matter of taste 🙂

  • Adam Richard wrote:

    I’m still a bit confused as to how to make a language interpretable, though – I can see how a compiler / runtime environment would have to be designed for it, but I’m not sure what kind of high-level language constructs would lend themselves to it. Can you give examples?

    I haven’t put a whole lot of thought into that, mainly because I didn’t think it would be an issue. To me it seems that anything compilable should be interpretable because an interpreter only adds capabilities (such as eval) but does not take any away (even directly accessing hardware could theoretically be done through an interpreter). But it’s quite possible I’m missing something. Are there certain features of compilable languages that could make their code uninterpretable?

    I think it’s a pretty bleak and resigned attitude that says that we can’t make a better low-level language than C.

    I’m not sure what Stephen meant exactly; perhaps there is disagreement on the meaning of “low-level”.

    In case I was unclear on this point, I am hoping that the language I described could be used as a replacement for low-level languages (low-level in the sense that they can access hardware easily and that their execution speed is about as close to hand-coded assembly as you can get). Ideally there should be no reason to use C at all, except to link with existing C libraries. I realize that some of my suggestions (like using reference counting) may seem to make the language necessarily produce slower code. But I hope that these mechanisms will only need to be used in rare cases. As I think more about the language and people provide me with examples of things that need particular consideration (like reference counting or runtime checks), I’ll have a better idea of which language features cannot be satisfied with C-like execution speed.

  • Are there certain features of compilable languages that could make their code uninterpretable?

    Hmm, what about declaration after use? If the interpreter sees a symbol whose declaration may appear anywhere in the program, then it would have to parse the whole rest of the program before being able to know which symbol is being referred to.

    double x;

    class C {
        function f() {
            x := 1;  // an interpreter doesn't know about the below x yet
        }
        int x;  // should shadow the global definition
    };

    Maybe there are other things whose meaning depends on knowledge of the whole program as well (although we generally try to avoid those even in compiled languages, to enable separate compilation).

    I like high level languages too. But optimized high-level code can’t be as fast as hand-written low-level code (when the latter is done properly), because of the Halting Problem. It’s really hard to explain what I mean by that, but basically an algorithm to optimize code can rarely do as well as a (perfect) human, so in some situations where speed is critical, low-level constructs (constructs that are “close to the machine”) are prudent. Perhaps low-level and high-level constructs don’t need to be in the same language, but they’re both useful in different situations.

  • Whoops, ignore my example. It doesn’t apply because the interpreter wouldn’t be running f() just because it sees a definition of it. Anyway, if definitions can appear after a use (at run time), then it seems like there would be problems for an interpreter.

  • Your example at least got me thinking a bit more about some of the potential differences in assumptions when interpreting versus compiling. One major difference is that traditionally-compiled languages like C and C++ assume that the syntax tree remains the same throughout the execution of the program. Most languages that tend to be interpreted can change the syntax tree during program execution, so a particular line of code is affected not only by the values of variables, but also by the current state of the syntax tree.

    Given this, it seems tricky to make a language both easily-compilable and easily-interpretable. Either you restrict the interactivity of the language by having a multi-pass interpreter (which would construct the syntax tree in the first pass and actually run the code in the second), forcing the user to have all their code ready at interpret-time and reducing the ability to just type lines of code at a prompt; or you make it much more difficult for the compiler to perform optimizations by maintaining a different syntax tree for each line of code. These are the two options that just came to mind; there are quite likely others that would make an interpreter/compiler solution easier.

    I agree that better high-level constructs could make for easier concurrency optimizations (referring to your first comment). Although it’s a bit off-topic, do you know of any particular problem sets that are difficult to parallelize?

  • I don’t understand what you mean about syntax trees and having a different syntax tree per line of code. Do you mean symbol table?

    “Although it’s a bit off-topic, do you know of any particular problem sets that are difficult to parallelize?”

    Anything with side effects. A statement with side effects might have to finish before any following statements can be run. Again, there’s no algorithm which can determine, in all cases, whether or not the side effects of a statement will impact the future statements. One planned feature of my language is a function which is declared to have no side effects, which would be enforced by the compiler by some conservative rules. Then these side-effect free functions could be parallelized or used with other functional-style language features.
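
    As a rough precedent for that feature (not the feature itself), GCC and Clang already accept attributes by which the programmer declares side-effect freedom, though they largely trust the declaration rather than enforce it with conservative rules:

    /* GCC/Clang: "const" promises the result depends only on the
     * argument values and that there are no side effects, so calls
     * may be merged, reordered, or hoisted out of loops. */
    __attribute__((const)) int square(int x) {
        return x * x;
    }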

    (Your reply didn’t appear in my RSS feed for the comments of this entry. Your blog software might be broken.)

  • Adam Richard wrote:

    I don’t understand what you mean about syntax trees and having a different syntax tree per line of code. Do you mean symbol table?

    Well, the symbol table changes too. And perhaps that’s all I mean, depending on what you believe the symbol table contains. What I’m saying is that it is perfectly valid in a dynamic language like Ruby to call a method, then re-implement the method, then call it again with the new behavior, all in the same program. For example (code from Stephen Paul Weber):

    class Test
      def method
        puts 'hi'
      end
    end

    a = Test.new
    a.method

    class Test
      def method
        puts 'bye'
      end
    end

    a.method


    This sort of behavior would make Ruby more difficult to run through a compiler because the compiler has to keep track of not only the methods being defined, but also the point in the program at which each method is called, because that affects which version of the method should be called. Am I making sense?

    Again, there’s no algorithm which can determine, in all cases, whether or not the side effects of a statement will impact the future statements.

    Can you provide a specific example of a problem that requires a long chain of statements with side effects (i.e. is not parallelizable)? To illustrate what I mean, a parallelizable problem in the same class might be the brute-force method of factoring semiprimes, where you can check a different set of numbers in a separate thread or process and combine the results later.

    One planned feature of my language is a function which is declared to have no side effects, which would be enforced by the compiler by some conservative rules. Then these side-effect free functions could be parallelized or used with other functional-style language features.

    Cool. That sounds like quite a useful language feature, allowing optimizations similar to the foreach optimization I mentioned.

    (Your reply didn’t appear in my RSS feed for the comments of this entry. Your blog software might be broken.)

    Yes, WordPress does appear to be broken. I have filed a bug: http://trac.wordpress.org/ticket/8405.

  • the compiler has to keep track of not only the methods being defined, but also the point in the program at which each method is called, because that affects which version of the method should be called. Am I making sense?

    Not really. That feature seems like polymorphism, and I don’t think the compiler has to keep track of those things. It could just rename the different implementations of the “same” method to different methods, and translate method calls to an algorithm that determines which method to call, and then calls it. The algorithm could involve simply looking up a function pointer, which the redefinition of “Test” could change.

    Unless there are other things that can change besides method bodies. Then I’d be less confused. But I can’t think of what other features those might be.
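
    A minimal C sketch of that function-pointer scheme, mirroring the Ruby example above (all names illustrative):

    #include <stdio.h>

    /* The "class" holds a function pointer; redefining the method
     * just swaps the pointer, as in the Ruby example. */
    typedef struct { void (*method)(void); } Test;

    static void method_v1(void) { puts("hi");  }
    static void method_v2(void) { puts("bye"); }

    int main(void) {
        Test a = { method_v1 };
        a.method();            /* prints "hi"  */
        a.method = method_v2;  /* the second "class Test" block */
        a.method();            /* prints "bye" */
        return 0;
    }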

    Can you provide a specific example of a problem that requires a long chain of statements with side effects (i.e. is not parallelizable)?

    Printing a linked list in order.

    Or, for a less real-world example, but one that doesn’t involve I/O, computing the Fibonacci sequence:

    int x1 := 0, x2 := 0, x3 := 1;
    while (0 == 0) {
        x1 := x2;
        x2 := x3;
        x3 := x1 + x2;
    }

    Maybe this algorithm can be converted to a parallelizable one which can compute the nth Fibonacci number without needing the previous results, and multiple processes could compute chunks of numbers. But having a compiler that’s smart enough to automatically deduce that algorithm is, I think, too much to ask.
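
    (As it happens, such a conversion does exist for Fibonacci: Binet’s closed form gives the nth number directly, with no dependence on earlier results. A sketch in C; floating-point precision limits it to roughly n <= 70:)

    #include <math.h>

    /* Binet's closed form: the nth Fibonacci number computed
     * directly, so separate processes could each compute their
     * own chunk of the sequence. */
    unsigned long long fib(unsigned n) {
        const double phi = (1.0 + sqrt(5.0)) / 2.0;
        return (unsigned long long)llround(pow(phi, n) / sqrt(5.0));
    }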

    (I’m assuming assignment of existing variables is a side effect.)

  • I think you should consider Lisp, e.g. SBCL.
