Wren is a small, fast, class-based concurrent scripting language (wren.io)
301 points by ahamez on Aug 28, 2022 | hide | past | favorite | 130 comments


Originally made by Bob Nystrom [1] who now works on Dart, it is now maintained by ruby0x1 who is a game developer and most importantly creator of a game engine that uses Wren as a scripting language [2]. Unfortunately Luxe is still not publicly available (closed beta), which I admit is a bit of a frustration for me as I've been following development for a few years, but it does show a lot of promise and in the meantime I've become a happy user of Godot. I keep Wren in the back of my mind as an easy-to-embed scripting language in cases where you could integrate Lua (which Wren outspeeds) but not LuaJIT.

[1] https://news.ycombinator.com/user?id=munificent

[2] https://luxeengine.com/


TIC-80 [1] (the open-source Pico-8-like fantasy console) supports writing games in Wren.

[1] https://tic80.com/


Oh, that's good to know, especially for games, which sometimes must go where JIT compilation cannot (consoles, or iOS outside of JavaScript).


It's a neat little language. It was created by the author of [Crafting Interpreters][0]; the language in the book (Lox) is essentially a slimmed-down Wren for teaching.

[0]: https://craftinginterpreters.com


And author of the excellent https://gameprogrammingpatterns.com before that!


Wren seems pretty elegant, but it makes the same mistake a lot of languages do when it comes to operator overloading. If you put the binary operator as a method on the first object, then there isn't a simple way to have "builtin_object <operator> your_object".

For instance, Wren doesn't provide complex numbers, so you aren't going to get a nice syntax for "1/z" or similar. I could make similar examples with scalar multiplication for vectors, matrices, or Numpy-style arrays, etc...

Python and other languages have fallbacks to work around this, but Wren does not. And really, I think of binary operators as 2 argument functions, not methods on the left object.


It wasn't a mistake, just a deliberate compromise.

My goal was a very small, simple, but flexible language. It needed to be pretty tiny to be embeddable in lots of applications. (And pragmatically, so that I could design and build the whole language myself.)

Single dispatch works really well for, like 90% of the kinds of operations you want to perform in programs. It reads really well syntactically and avoids having to namespace every function with its argument type like you see in C and Scheme (hash-remove!, dict-remove!, set-remove!, etc.). And it does this without needing types or static resolution. Also, it's simple to compile to something fairly efficient at runtime using a vtable-like structure.

It just sucks for binary operators. But coming up with a better solution for that requires something like:

* Static types and static dispatch like C++, C#, etc. do.

* Multimethods and multiple dispatch.

* Double dispatch like Python.

* Hacking something special into the language for operators like not allowing them to be overloaded and just having a fixed set of types they apply to.

Static types and multimethods are a huge jump in complexity. Python's approach is really slow. Special casing operators makes me sad.

So I just took a relatively small expressiveness hit and decided that, yes, operators aren't symmetric. In practice, it's mostly fine.


Sorry for focusing on such a small part of your comment. I’m learning about language design (as much as I can) and I don’t really understand what you mean by “double dispatch like Python”.

I think (maybe soon thought) that Python has single dispatch. Since you’ve invented languages and work on them I’m pretty much 100% sure I’m wrong and would love to learn why.

I read https://en.m.wikipedia.org/wiki/Multiple_dispatch and came to the conclusion Python has “single dispatch polymorphism” because the method resolution is based on the type of the calling object dynamically at runtime and there is no method signature overloading, which means the argument type(s) doesn’t play a part in picking/resolving the method to be called.

If you have time, do you mind explaining or pointing me to some resources?


I think he meant that for Python operators, the method can be dispatched based on the first or the second argument.


Yes, this is a special case that Python implements for binary operators. When evaluating "a + b", if a.__add__(b) doesn't work, it'll try b.__radd__(a). See https://docs.python.org/3/reference/datamodel.html#object.__...
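As a concrete sketch of that fallback (a toy Complex type, not a real library):

```python
class Complex:
    """Toy complex number, to show Python's __add__ / __radd__ fallback."""
    def __init__(self, re, im):
        self.re, self.im = re, im

    def __add__(self, other):
        # Handles Complex + Complex and Complex + int/float.
        if isinstance(other, Complex):
            return Complex(self.re + other.re, self.im + other.im)
        if isinstance(other, (int, float)):
            return Complex(self.re + other, self.im)
        return NotImplemented

    # For 1 + z: int.__add__ returns NotImplemented, so Python
    # tries the reflected method on the right operand.
    __radd__ = __add__

z = Complex(1.0, 2.0)
w = 1 + z          # works only because of __radd__
print(w.re, w.im)  # 2.0 2.0
```

Without `__radd__`, `1 + z` would raise a TypeError even though `z + 1` works — which is exactly the asymmetry being discussed for Wren.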


Yeah - `__add__` then `__radd__` isn't exactly double dispatch, but it's close. (There are corner cases involving inheritance where double-dispatch will work correctly but Python's approach will fail to pick the most-specific method.)


Yes, this is what I was referring to.


It is a mistake, because if you can't do "1.0 - z" or "0.5 * A", I don't know why you have operator overloading at all. What types with useful binary operators don't need to interact with scalars? Sets? Sequences? There aren't many math-y things that don't need numbers. The absence of complex values and matrices makes me sad, and I can list a dozen more on top of those.

Yes, static typing would be a huge change, but I think you're greatly exaggerating the complexity of multimethods for (only) binary operators. You don't want to do it, which is fine. It's your language, and you don't owe anyone anything, but be honest about your rationale.

You've already got a failure path to report errors when the types don't work, so falling back to a single global table of "first_type x second_type => function" is not more "huge" than anything else. You don't need scoping for operators like this, so it's literally one table. You can see the code in your head as you read this. I'm sure you could fill in the details or imagine some other simple implementation if you wanted to.
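The one-table mechanism described above can be sketched in a few lines of Python (hypothetical `defop`/`dispatch` names; exact-type lookup only, no subtype handling):

```python
# Hypothetical sketch of a global "(op, first_type, second_type) -> function"
# operator table, as the parent comment describes.
OP_TABLE = {}

def defop(op, left, right):
    """Register a handler for (op, left_type, right_type)."""
    def register(fn):
        OP_TABLE[(op, left, right)] = fn
        return fn
    return register

def dispatch(op, a, b):
    fn = OP_TABLE.get((op, type(a), type(b)))
    if fn is None:
        raise TypeError(f"no {op} for {type(a).__name__}, {type(b).__name__}")
    return fn(a, b)

class Vec:
    def __init__(self, xs):
        self.xs = xs

@defop("*", float, Vec)
def scale_left(s, v):
    return Vec([s * x for x in v.xs])

@defop("*", Vec, float)
def scale_right(v, s):
    return Vec([x * s for x in v.xs])

print(dispatch("*", 2.0, Vec([1, 2])).xs)  # [2.0, 4.0]
```

This is the minimal "first_type x second_type => function" shape; a real language would still have to decide how the table interacts with inheritance and scoping.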


> I don't know why you have operator overloading at all.

Wren doesn't have "operator overloading". There is no separate feature in addition to the normal operator support that also lets you overload them. Wren just uses single dispatch for infix operators. Even the "built in" operator behavior like addition on numbers is simply pre-defined methods on the Number class. Like Smalltalk (which Wren is heavily inspired by), everything is just a method call and there is little "special" behavior for primitives.


So if I understand this correctly, "a + b" is syntactic sugar for "a.+(b)" (modulo the exact name of the method)?


Yes, exactly. Basically everything is method calls.


"It wasn't a mistake, just a deliberate compromise."

This might describe 80% of all the disagreements people have.


simplicity is definitely valuable, but it's not just operators where this type of thing shows up.


That's why I said "90% of operations" :)


Scala solves this with a syntactic twist: if the (in this case usually symbolic) method name ends in a colon (like, say, `+:`), it's right-associative.

So the following code¹ shows a method call (`prepend`) on the right operand:

  val someList = List(2, 3, 4)
  
  val extendedList = 1 +: someList
  // As the latter is just syntax for a regular method call
  // `extendedList` could be also defined with the same result as:
  //  someList.+:(1)
  
  val otherExtendedList = someList.+:(1)
  
  @main def entryPoint =
    println(extendedList)
    println(otherExtendedList)
___

¹ https://scastie.scala-lang.org/y55n254oSVOFmw2CjmckjA


In Haskell, you can just declare which way your operators are supposed to associate (if any). That doesn't seem to cause any problems in practice. (Though it helps that Haskell is statically typed, so the compiler can yell at you when you guess the direction of associativity for e.g. your extend-a-list operator wrong.)


I really look forward to a language with first-class explicit algebras, with operators, laws, equality, types. Rather than ad hocery on multimethods, or kludgery slathered thick over single dispatch, or the ever popular "you just can't get there from here".

"1 is not an object; + is not a message"... I'm failing to find the quote, but it dates back like 40 years to smalltalk, and yet here we still are.


Wouldn't it become very undecidable very fast going down that route? Programming by defining algebras is interesting idea with many practical applications, even in down-to-earth CRUD apps, but how much magic can you really add before it becomes impossible (or even just very hard) to compile?

The closest example in mainstream languages is Haskell, but even there the compiler basically trusts you. Is it possible to add something else while remaining decidable/efficient? I genuinely don't know.


Haskell already has an extension called UndecidableInstances.

Funny enough, even your basic Hindley-Milner style type inference has exponential runtime in the worst case, even if you don't add any bells or whistles.

In practice, people don't tend to write code that triggers that worst-case.


Am I the only one that likes vanilla Haskell? I find that it quickly becomes messy with all the GHC extensions. I am, though, more of an SML kind of guy.


Many libraries try to stick to Haskell 98. Also whenever someone writes a paper about some new techniques, they always seem to take a lot of pleasure in pointing out when their technique works in Haskell 98.

I like that you can mix and match GHC extensions even in the same project. So one library (or even just one module) might use some crazy and messy extensions, but you can still use it from vanilla Haskell.

http://dev.stephendiehl.com/hask/#language-extensions has a list of extensions and some judgement on them.

There are a few that are really neat, but don't really cause any mess:

For example, I really like TupleSections. They are not strictly necessary for anything, they are purely cosmetic / syntactic sugar. But they also don't cause any mess. https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/tupl...

Also: TypedHoles are really neat for developing, and will never show up in your final code. https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/type...


GeneralisedNewtypeDeriving is one that I'd consider fairly important as well.


And also fairly benign, yes.

In contrast: GADTs are really important, but they get you pretty far away from Vanilla Haskell.


> but even there the compiler basically trusts you

Yeah, I'm ok with that. I'm more after expressivity than safety or efficiency. Sometimes they're not separable, but confronted with the usual "we don't know how to efficiently compile that, so we won't allow you to express it", my response is always "the language is not the right level at which to choose such engineering tradeoffs - give me finer-grain control, and in program portions where it matters, I'll trade expressiveness for efficiency or safety... after maybe first trying caching and cloud parallelism and patience and external analysis and ... ".


> after maybe first trying [...]

Oops, I got carried away at the end there. Yes, maximally-restricted tolerable representation buys maximized compiler leverage. But might we more often metaprogram that out of diversely-performant maximal power, rather than bootstrapping it on performant but constraining circumscribed power?


Are you thinking about something like AlgebraicJulia, specifically Catlab.jl? It lets you specify Generalized Algebraic Theories and comes with a large list of standard theories. https://algebraicjulia.github.io/Catlab.jl/dev/apis/theories...

  @theory Category{Ob,Hom} begin
    @op begin
      (→) := Hom
      (⋅) := compose
    end

  Ob::TYPE
  Hom(dom::Ob, codom::Ob)::TYPE

  id(A::Ob)::(A → A)
  compose(f::(A → B), g::(B → C))::(A → C) ⊣ (A::Ob, B::Ob, C::Ob)

  (f ⋅ g) ⋅ h == f ⋅ (g ⋅ h) ⊣ (A::Ob, B::Ob, C::Ob, D::Ob,
                                f::(A → B), g::(B → C), h::(C → D))
  f ⋅ id(B) == f ⊣ (A::Ob, B::Ob, f::(A → B))
  id(A) ⋅ f == f ⊣ (A::Ob, B::Ob, f::(A → B))
end


I recently fell in love with Agda and it seems like you'd like it too. It's basically Haskell but even more mathematical, pure, abstract and total.


Agda[1] is fun. Also Idris[2] - more programming language, less proof assistant.

[1] https://wiki.portal.chalmers.se/agda/pmwiki.php https://github.com/agda/agda

[2] https://www.idris-lang.org/


Does Julia have this quality?

I'm not suggesting it does, I'm genuinely curious for anyone that knows this area well.


> Does Julia have this quality?

Multiple dispatch, as in Julia, Mathematica, CommonLisp, etc, does get you much closer to this vision than single dispatch. To an extent, but then you're on your own again.

Maybe consider compilation as a collaboration of a computer and a wetware compiler. You have knowledge and do analysis which you are unable to share with the computer compiler. For silly tiny illustration, you might know some size variable has to be an integer power of 2, but most language's type systems don't let you tell the compiler that. Or think of writing a datatype in old C - a struct and a bunch of functions. You have a mental model of how they should all hang together, and then you mentally compile that down and hand emit C code and tests. Or you could hand emit assembly code and tests instead. But the C compiler is the more helpful collaborator. You can talk about more (with some sacrifices), and more easily. Now given say two structs and a bunch of associated functions, multiple dispatch allows weaving them together without pains which in single dispatch so discourage weaving. Which is better, but still, you've a couple of structs, a bunch of functions, and a still limited ability to express your intent so the computer compiler can help you with it.

So you can have multiple dispatch of + and - , and more easily add new numeric types, but any relationship between + and - is still all in your mind - the compiler has no idea that you think them connected.


Yes, Julia has multiple dispatch, which works really well for this type of overloading.


Julia avoids the "ad hocery on multimethods, or kludgery slathered thick over single dispatch", but I'm not sure about "first-class explicit algebras, with operators, laws, equality, types".

Would "first-class explicit algebras" involve the language allowing redefining of operator precedences? Julia doesn't allow that. Would it require support for customizable associativity? You can painstakingly do that by defining custom types for every intermediate operation, but it isn't part of base Julia.

The original problem in the thread's start, "builtin_object <operator> your_object", is neatly solved by multiple dispatch. But the ask in your grand-parent comment is much bigger, and Julia only has slightly more support for that compared to other languages (i.e. it's a bit less of a pain in the ass to create such algebras, but you'll still have to create it yourself).


Yes. Though I was thinking of the Julia case as "ad hoc" - you define a couple of multimethods, you intend that they're related in some way, collectively creating some algebra with some properties, but all Julia gets is "ah, some random methods".

> redefining of operator precedences?

That can be confusing, but it's a fun thought. Maybe type-sensitive parsing? Or maybe "^--- the precedence of this operator is ambiguous between the theories visible at this point - please disambiguate which theory's operator you intended, either Int32:+ or ..."?


What are the deficiencies of Haskell in this area, to your mind?


> What are the deficiencies of Haskell in this area, to your mind?

So I knew a wizzy haskell programmer, who repeatedly did the following. They'd deeply understand the mathematics of some problem domain. Use Coq to explore, analyze, and find a solution approach. Design a haskell model to support it. Extend ghc internals to permit that model. And then mentally compile a domain solution down to haskell. Oh, yeah, and the haskell compiler would then emit binary.

Through much of that, the haskell compiler isn't helping. It's serving as an inconveniently inexpressive intermediate representation, the backend target for a wizzy wetware compiler. I so very can't do that - just no way. I'd need a language and compiler which can collaborate with me much more extensively, far earlier in the process. One with which I can describe and discuss the problem domain.

I want to be able to say, drat, looks like I'm stuck yet again implementing another bloody 2D-graphics-like thing... Ok, give me a bounded discrete affine space over numeric tuples with all the trimmings. And for the implementation type, let's start in with this set of cache-aware tradeoffs. Ok, maybe that will satisfice. Moving on...

I'd like a language which supports a pushout lattice of theories. Given a theory, add a couple of types, operators, or laws, added in any order, and you get to the same theory. Permitting programming which feels like math. Not years of committee discussion to make monads applicative. I don't even need Agda-style "let's prove the world" - I'm mostly ok with Ruby-style "tests are green so it's all good". But I really want to be able to express problem domains, and to work collaboratively with tooling to craft solutions. And that's not (yet?) haskell.


Wow, that's insightful.


:) Decades of frustration, and some fun conversations with good people. Credit for the "pushout lattice of theories" characterization to one - "sounds like what you want is a ...".

Wish I knew how to nudge progress faster. Late 1980's "seems clear I want something X-flavored, and with so much related work, we'll be there soon!" became "progress is sooo slow and diffuse". Became "I still hope to dance code before I die". Became "oh well, not looking likely". Or how to avoid burnout on moonshot spikes... or...


I'm interested to read more about this if you have any links or want to sketch out a design


A couple of years before covid, the Boston Haskell meetup group was active and wizzy. And bar discussion would repeatedly turn to what a "Haskell-NEXT" might look like. Aside from similar bars at conferences, I've not encountered such discussion elsewhere. Having such discussion work has seemed very sensitive to including extreme outlier quality people. Which very isn't me - I mostly just ask questions. They go home, and even though the people left have wizzy math backgrounds and are doing their own language implementations... continuing can be hard. And the costs of such discussion in text, or in zoom... I can't picture that. So... I'm unclear on how progress might be fruitfully nudged. The bottleneck seems to be waiting for research papers to drop, and for existing language efforts to incrementally improve, slowly creeping closer to "oh yeah, we could do that soon".

With the big caveat that it's been years since I even slightly tried to track the big picture, here are two thoughts on directions which I then felt were getting less attention than I might have liked. First, having wizzy type system power, while not much caring about static typing. I thought of it as sort of an optional-typing CommonLisp attitude. Expressive power, but fine-grained collaborative-compilation tradeoff control over where to pay for what assistance. And second, extremely extensible languages, as can do bulk subsumption of others. So imported modules providing compiler cross-sections, from syntax to type analysis to optimization tweaks. ... Ah well, if humans don't get to it, maybe ML Copilot2030 will give us "sure, just express what you want in any mixed mashup of different languages and math, and I'll try guessing what you're trying to say ;)"

So... I'm unclear on a fruitful way to proceed...? Sorry... I appreciate the thought.


This is an area where typeclasses (as they're known in Haskell) or traits (as they're known in Rust) really shine.


Or multimethods (as found in CL/Clojure/Dylan/Julia)


I'm surprised to hear that, given how closely Wren's semantics resemble Lua's.

For a binary operator, Lua uses the left operand's metamethod when both operands are tables or userdata; when one side is a primitive, it uses the metamethod of whichever operand has one.

This gives elegant results at the expense of some implementation complexity: there's no way to tell, from inside the metamethod, which operand you were dispatched on; all you know is that if both sides have metatables, you're in the left side's method.


In C++, you can define a global function for the operator that is overloaded on the number and type of parameters:

  BigInt operator+(BigInt a, int b);
  BigInt operator+(int a, BigInt b);
  BigInt operator+(BigInt a, BigInt b);
  ...


Can’t you solve this particular issue with extension methods the way Kotlin and C# do?


Extension methods rely on overloading, they don't use dynamic dispatch.


Yeah, but typically in these languages operators don’t really need dynamic dispatch. (Although, I agree this causes surprising results).


C# and friends dispatch operators through overloading on static types. Extension methods rely on the same mechanism.

A dynamic language like Wren cannot use such a mechanism. The author already pointed out the options: the current OO-style dispatch, Python-style "if the operator is missing, try the reverse direction", or multimethods like Julia (which are quite complex and heavyweight).


Haven't classes -- as things which house methods and fields together -- gone out of fashion? I think the same thing has happened with inheritance.

The ontological problem of "does the knife cut the cucumber" or "does the cucumber get cut with the knife" seems serious. The "is-a" relationship is also problematic, because does a square inherit both from rhombus and parallelogram?


> Haven't classes -- as things which house methods and fields together -- gone out of fashion?

There's your issue right there, "fashion". If we put aside fashion for a moment, we'll notice that all of the most heavily used languages are multi-paradigm. Trying to be purely-OOP or purely-functional, is intellectually and academically appealing, but not very pragmatic for a general-purpose language. In the end fashion comes and goes, but engineering remains.


> The ontological problem of "does the knife cut the cucumber" or "does the cucumber get cut with the knife" seems serious.

Kinda both! The knife (aka: the "Cutter") can perform a "Cut" action on any object (aka: the "Cutable"), but the Cutable objects have to receive the Cut action and determine if the Cutter has enough "Cutter.Strength" to Cut the Cutable.

I think the failure of OOP is applying it to things that aren't appropriate or creating a Factory pattern that causes over-abstraction. OOP in itself isn't bad, much like PHP or Windows has their place even though they get shit on all the time, but the lack of care applying OOP (or any other pattern or architecture) is the bad part.


> Haven't things which house methods and fields together gone out of fashion?

Rust's structs have methods and fields together, and that seems very much in fashion!

But, I wish that using OOP constructs to model ontologies would go out of fashion faster. The examples you pose are so abstract as to have no correct answer, because the correct answer depends on the needs of the software.

Cucumber/knife could be modeled in a thousand different ways depending on whether you're making The Sims, a recipe website, or the firmware for a robot chef.

"is-a" is indeed problematic, because "is" is a linguistic construct that can have different meanings, and there's never just a single "is-a" relationship. I'd go out on a limb and say you should never need classes for Square, Rectangle, etc; just make a Polygon that is a closed collection of points, and be done with it.

But that doesn't stop OOP education (from web tutorials to university courses) from teaching students to think about ontologies first and requirements second.

These are problems with the way OOP is often used, and with the language features that encourage this kind of usage. But I don't think it's necessary to say that classes in their entirety are an antipattern.


Yes, your job is to get things done, not to care about ontological purity.

The problem with ontological purity is that it would need complicated features that aren't worth bothering with. Oval vs circle, for example, is a classic case of parameter-dependent typing, yet no language on this planet has objects whose type changes as its data changes.


> The problem with ontological purity is that it would need complicated features that aren't worth bothering with.

I think that might be true, but I don't think that's usually the problem. I think the problem is that there is almost never one "true" ontology, there are usually multiple overlapping ontologies. And in most cases of real programs, you don't need an ontology anyway.

Most programs, if they need to handle ellipses as well as circles, should simply treat everything as an ellipse. If there is a specific time you need to know if something is a circle, write a function to check that. If performance is eaten up checking for circles repeatedly, sort the data into two lists of circular ellipses and non-circular.
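For example (a hypothetical Ellipse type, sketching the parent's suggestion in Python):

```python
from dataclasses import dataclass

# Hypothetical sketch: model everything as an ellipse, and make
# "circleness" a predicate rather than a subclass.
@dataclass
class Ellipse:
    a: float  # semi-major axis
    b: float  # semi-minor axis

def is_circle(e, tol=1e-9):
    return abs(e.a - e.b) <= tol

shapes = [Ellipse(2.0, 2.0), Ellipse(3.0, 1.0)]
circles = [e for e in shapes if is_circle(e)]
others = [e for e in shapes if not is_circle(e)]
print(len(circles), len(others))  # 1 1
```

No ontology needed: the "is it a circle?" question lives in one function instead of in a type hierarchy.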


> Yes your job is to get things done not to care about ontological purity.

It is part of the job since it changes how (well) things get done.

> no language on this planet has objects whose type changes as its data changes.

Yes there are. You might want to look into dependently typed languages.


Dependent type systems like Idris do have that.


What on Earth makes you think it's gone out of fashion? Pretty much all of the most widely-used languages have classes or analogues, if not full OOP.


I thought that privileging the first argument of a function, which is all that separates a method from a function, was understood to be an ugly hack. I thought that this was acknowledged now, and only legacy languages are stuck with it. Julia, Rust and Go don't do this. So it's going out of fashion and becoming legacy.

IOW, there was never really a difference between

  obj f(obj);
and

  class obj {obj f();}
so the whole idea was a mistake.

Likewise,

  obj.f()
should be synonymous with

  f(obj)
and the fact that in some languages it's not is evidence of a poorly thought-out abstraction. See multiple dispatch for the death blow.

There's also a discussion elsewhere here about operator overloading, and how arithmetic operators don't behave like methods.
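In Python, for instance, the method spelling really is the function spelling with the receiver as the first argument:

```python
class Obj:
    def f(self):
        return "called"

obj = Obj()

# Method syntax...
a = obj.f()
# ...is just looking f up on the class and passing the receiver
# as the first argument:
b = Obj.f(obj)
assert a == b == "called"

# A bound method is the same function with its first argument fixed:
g = obj.f
print(g())  # called
```

Languages differ in whether this equivalence is exposed (Python, Go's method values) or hidden behind separate call syntax.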


> I thought that privileging the first argument of a function, which is all that separates a method from a function, was understood to be an ugly hack. I thought that this was acknowledged now, and only legacy languages are stuck with it. Julia, Rust and Go don't do this. So it's going out of fashion and becoming legacy. [emphasis added]

What?

Go has methods which "privilege" the first argument to functions:

https://go.dev/tour/methods/1

  func (v Vertex) Abs() float64 {
    return math.Sqrt(v.X*v.X + v.Y*v.Y)
  }
Though it moves the argument out of the argument list and between the func keyword and method name.

Rust also has a notion of methods:

https://doc.rust-lang.org/rust-by-example/fn/methods.html

Were you thinking of other languages?


In Go you can assign this "method" to a func(v Vertex) float64 and it will run just fine: https://go.dev/play/p/NBS4Fc3r0w-

It really is just syntax sugar.


You can have multiple `f` with different "arguments" (if you consider the receiver part of the arguments) without full multimethods, which are actually pretty hard to get right; only one language out of your examples (Julia) does that. The other two languages (Rust and Go) fully retain classes, slightly obscured to fit with traits and interfaces respectively.


obj.f() shouldn't necessarily be synonymous with f(obj) though. How would the latter case handle private fields, for example?


You'd use a module to hide things. Some OOP languages conflate modules (which do information-hiding) with private fields in classes. If you think that `obj.f()` and `f(obj)` should mean different things, then your reasoning is likely faulty.


> You'd use a module to hide things.

And then you need functors and existential types to recreate classes… ;-)

Your reasoning is likely faulty if you don't see that OOP is a very advanced and great module system.

The only problem with OOP is that it's often misused for other things that aren't modules. But that's a different story.


To clarify, I’m talking about private fields in each object/struct. You could say all functions defined in the same module get access to private struct fields. That works until you want a more specialized version of some of those functions — essentially, a different module that has access to protected fields.

Here’s another example. You’re writing a function `g` that accepts an object of type `Foo` and calls function `f` on it. You have a default implementation of `f`, but you also want to let the caller supply their own implementation. If you have some way of attaching behavior to a type — classes, typeclasses, etc — this is trivial. If not, your function has to explicitly ask for every single function from the caller.


> You'd use a module to hide things. Some OOP languages conflate modules (which do information-hiding) with private fields in classes.

And you can instantiate multiple instances of these "modules"?


You don't need multiple module instances, the modules just need access to details about the opaque type or private portions of a type. If you've ever seen this in a C header:

  struct foo;
That's an example of an opaque type. Put that in a header, and any function can make use of it in any other C file, they just can't access its innards (well, it's C so you actually can if you know how its contents are packed). Functions can accept them as parameters and return them knowing nothing but that opaque type and some additional function signatures like this (to do something useful with it):

  struct foo make_foo(...); // or struct foo*
  void print_foo(struct foo*); // or struct foo
So I can write something like this:

  // main.c
  #include "foo.h"
  // other includes
  void do_something(struct foo* f) {
    // lots of code
    print_foo(f);
  }
  int main(int argc, char* argv[]) {
    struct foo f = make_foo(a_param);
    do_something(&f);
    return 0;
  }
In a hypothetical C you could do something like this:

  struct foo {
    int a;
    char b;
    private;
  };
That last bit indicating that there is something there, but not what it is. Then somewhere you'd have a fuller definition:

  struct foo {
    int a;
    char b;
    // private expanded
    float c;
  };
Which would be used by functions which are meant to actually operate on the rest of this structure. In this case, anything that knows of struct foo can directly access its "public" fields of a and b, but have no clue about the remaining fields.


If you want to maintain both syntaxes and have there be some sort of equivalence between them, then I think the type system should handle it. Where a type operator ‘private-access’ grants access to some object's private fields, and then obj.f() == f(private-access obj).

Maybe this would be seen as a violation of encapsulation, but making the two equivalent requires the ability to allow private variable access at the call site.


The point is that they’re not equivalent. What’s the difference between your proposal and just not having private members at all?


From a mathematical point of view it can seem like that. But now pick a language like Nim, which also has a crazy regex to match symbol equality, uniform call syntax, etc., and the IDE completions are much harder to find.


I wish I had your problems.


Only the misapplications you give examples of have gone out of style, I’d say.


How do you know when it's a misapplication? What do these constructs actually model?


State Machines. Objects are supposed to model state machines, each mutating method is a transition event that takes the current state and any number of arguments and transition the state machine to a new state.

In very simple words, each object in an OOP program is a virtual computer, its class define the instruction set.
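A toy sketch of that metaphor in Python (the Door class and its states are purely illustrative): each mutating method is a transition event that either moves the machine to a new state or rejects an invalid transition.

```python
# A door modeled as a state machine: each mutating method is a
# transition event from the current state to a new one.
class Door:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "closed":
            raise ValueError("can only open a closed door")
        self.state = "open"

    def close(self):
        if self.state != "open":
            raise ValueError("can only close an open door")
        self.state = "closed"

d = Door()
d.open()
print(d.state)  # open
```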

"Objects are a recursion on the notion of computer itself"

--Alan Kay


Sounds profound, but I don't know why you'd want to use this metaphor heavily.


State encapsulation is useful insofar as, if you wanted to write a proof that those pieces of state were only ever set correctly (eg using a statechart), you can now do that.

OOP languages never actually include this because nobody seems to particularly care if their code actually works.


Classes are a good feature to have, provided you aren't forced to use them and you don't go overboard with OOP (cough Java). Any equivalent becomes rough in some common use cases. I might be wrong, but it seems like Python approached it from that direction, with Py2 having half-baked classes.

Classes are also nicer to deal with in dynamically-typed languages, or at least ones with very smart automatic static typing. You don't get stuck dealing with all that pedantry.

Btw, your example works fine in OOP. It'd be parallelogram -> rhombus -> square. But yes there are examples of it not working well.


Ah, I meant rectangle instead of parallelogram. Rectangle, rhombus, square. What's the inheritance hierarchy?


Yeah, that's an example of inheritance not working. Python does support multi-inheritance, but it's messy. Best you can do is interfaces instead of subclasses. Honestly can't remember the last time I've subclassed something and actually cared to inherit methods/variables.
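For anyone who hasn't seen it spelled out, here's a minimal Python sketch of why the hierarchy breaks down once mutation is involved (class and function names are illustrative):

```python
# Code written against Rectangle assumes width and height vary
# independently -- an assumption Square cannot honor.
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):   # must keep the square invariant...
        self.w = self.h = w   # ...so it silently changes height too

def stretch(r):               # written against Rectangle's contract
    r.set_width(10)
    return r.area()

print(stretch(Rectangle(2, 3)))  # 30, as expected
print(stretch(Square(3)))        # 100, not 30: Square broke the contract
```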


That's because there are two separate types of inheritance that often get conflated in popular OOP languages: subclassing (implementation inheritance) and subtyping (specification inheritance). The 'types' here are the interfaces in typical OOP languages, though their (especially Java) failure to orient types around them create much of the pain developers experience. See https://www.cs.utexas.edu/~wcook/papers/InheritanceSubtyping...


No language on earth does data dependent typing.


Right, but some can infer your types with much less manual input than in Java or C++. Rust does more automatically. Or actually C++ is the most automatic if you really abuse templates, but the compiler error messages become impossible to read.


No they haven't. Everyone still loves their type dependent namespaces. Just look at the ugliness of Haskell records where every field is a global function inside a module.


Erratum: Parallelogram should be rectangle. Thanks, newobj.


As a complete aside, Wren was also the name of the in-house renderer at Rhythm and Hues. Used in award winning films like Life of Pi before they went bankrupt.

Anyway not really related, but it’s the first thing that popped into my mind since it was my first gig.


Would definitely appreciate more in-depth commentary on 'fibres' vs coroutines, since it seems like there's a difference in 'scheduling' and Wren uses fibres. What are the pros and cons?


Agreed. Janet uses the term fibres to describe its green threads managed with a Lua-like coroutine interface. Is that like what's going on here?


I think this is one of the best languages I've ever seen. All the concepts I care about are covered, which opens up the ability to write complex logic for manipulating data.

Wow, thank you very much for all of this! It's really great, and has powerful potential to become the inner scripting language for many projects that need some kind of macros or scripts inside the main environment (game engines, game emulators).

Good job! I get the feeling you have wide experience with scripting languages developed for internal purposes inside a big app.


I am integrating Wren into a game via wrenbind17. My codebase is C++.

So far it is relatively smooth. Wren itself, the fibers support, and the language in general are simple, great tools that have helped me a lot.

But I also want to list things I would like to be smoother to integrate.

Probably half of these are Wrenbind17 limitations more than Wren, I don't know, but here I go.

1. Cannot expose enum constants easily.

2. Inheritance: if I create a subtype of A, call it B, and then create in Wren a subtype of B, I cannot call A's methods. This is painful.

3. Exposing classes was pretty easy BUT exposing state from C++ was a bit more challenging. I had to do something like (by generating code):

  var State = null

  class StateSetter {
    static set(s) {
      State = s
    }
  }

On the C++ side a variable of that type is exposed, set is called from C++, and the var State imported.

4. In ChaiScript callbacks can be exposed directly with no wrappers. In Wren I am not sure that's possible. I had to wrap stuff (look at Wrenbind17 for an example).

5. No static properties or constants means a lot of boilerplate to expose just enums (had to wrap each enum value in a function).

All in all Wren is working pretty well. I could even expose nlohmann json as its own json type and make it work natively with zero casting, as it should in a dynamic language.


I like about everything except the leading underscores for _fields and for __static_fields. They look really bad, a remnant of the 80s. I understand that one of the goals is keeping the VM small but there must be another way.


Python does it and is still very much alive. It's also idiomatic in C++.
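In Python the convention even has a bit of language support; a quick sketch (class and field names illustrative):

```python
# A single leading underscore marks a field as internal by convention;
# a double leading underscore triggers name mangling.
class Counter:
    def __init__(self):
        self._count = 0       # "private" by convention only
        self.__secret = 42    # mangled to _Counter__secret

c = Counter()
print(c._count)               # 0 -- accessible, just discouraged
print(c._Counter__secret)     # 42 -- mangling, not real privacy
```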


It's definitely not idiomatic in C++. The standard reserves names starting with an underscore and a capital letter or containing double underscores in any scope, and identifiers beginning with an underscore in the global namespace.

Most style guides I've seen that use underscores in identifier names use trailing underscores deliberately to avoid colliding with names reserved by the standard.


Popularity doesn’t mean something isn’t “a remnant of the 80s”, as the previous poster put it.


And PHP still has __construct() and friends. It still looks like a remnant of the '80s, though.


Ugh, I hate this convention. I will grudgingly admit, after spending the last year in a C++ codebase, that it makes reading code somewhat easier to have a clear indicator of whether we're looking at a class variable or not, but something about that unnecessary character bugs me.


I work in C++ and removing the confusion between local variables and class members with only one character seems to me like a very good idea!


Is it time for some new characters like an emoji to signify attributes like this?


You could set up a font with special ligatures for something like this, without changing the convention…


I like this a lot. The interoperability with C is especially cool, one of the reasons I liked Rust too. And it covers concurrency well, which scripting languages usually don't.

Edit: Originally complained about lack of async/await but realized that fibers cover the use case for that.


Previously (with comments; several other appearances with no discussion):

https://news.ycombinator.com/item?id=23660464


Thanks! Macroexpanded:

Wren is a small, fast, class-based concurrent scripting language - https://news.ycombinator.com/item?id=23660464 - June 2020 (54 comments)

The Wren Programming Language - https://news.ycombinator.com/item?id=16945227 - April 2018 (30 comments)

Wren: a small, fast, class-based concurrent scripting language - https://news.ycombinator.com/item?id=11611661 - May 2016 (39 comments)

Wren: A classy little scripting language - https://news.ycombinator.com/item?id=8826060 - Jan 2015 (61 comments)


It seems weird to market it as a "concurrent" language. A lot of mainstream general-purpose languages from the past 30 years have some form of coroutine/generator. Off the top of my head Lua, JS, Ruby, Go, Python etc. all have them, with Go being an exception that has actual threads, IIRC. This may be a "me" issue, but hearing it marketed as a "concurrent" language leads you to believe it has real parallel execution.
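To the coroutine point: a minimal Python sketch of concurrency without parallelism (the scheduler is illustrative). Only one task ever runs at a time; each yields control voluntarily.

```python
# Coroutines give concurrency (interleaved tasks) without parallelism.
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    # A minimal cooperative scheduler: run each task until it yields,
    # dropping tasks that have finished.
    tasks = list(tasks)
    while tasks:
        for t in tasks[:]:
            try:
                print(next(t))
            except StopIteration:
                tasks.remove(t)

round_robin([task("a", 2), task("b", 2)])
# a step 0 / b step 0 / a step 1 / b step 1 -- interleaved, one thread
```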


Wren seems great. Last time I needed to embed a language I used Lua, and I haven't had the need since then, but I've been thinking for years that next time I'll use Wren. It looks really nice. I skimmed through the code a few years ago and it was surprisingly small and seemed very well structured


I thought about using wren for Minecraft modding but threw that idea away because I don't have the time to build a JIT Java bytecode compiler that can compete with Rhino. I think wren is a much better choice than JavaScript or Lua.


> lovingly-commented

Yeeesh! Sorry don't want loving comments, just meaningful concise ones where needed, thanks.


Cool. Can this be embedded in Rust? Does it support wasm32 (probably this is how the web demo runs?)?



Note that the last commit was 8 years ago (when Rust hit 1.0).


Well, basically I copied the second link from https://github.com/wren-lang/wren/wiki/Language-Bindings just randomly; I didn't think to check the release date to be honest with you, my bad.

The other Rust projects seem more up to date; for example https://github.com/Jengamon/ruwren was last updated on May 9.


What's wasm32 (vs just normal wasm)?


wasm32 uses 32-bit pointers for memory addresses, instead of 64 (wasm64).

> Why have wasm32 and wasm64, instead of just using 8 bytes for storing pointers?

> A great number of applications don’t ever need as much as 4 GiB of memory. Forcing all these applications to use 8 bytes for every pointer they store would significantly increase the amount of memory they require, and decrease their effective utilization of important hardware resources such as cache and memory bandwidth.

> The motivations and performance effects here should be essentially the same as those that motivated the development of the x32 ABI for Linux.

> Even Knuth found it worthwhile to give us his opinion on this issue at one point, a flame about 64-bit pointers.

https://webassembly.org/docs/faq/#why-have-wasm32-and-wasm64...

---

> The x32 ABI is an application binary interface (ABI) and one of the interfaces of the Linux kernel. It allows programs to take advantage of the benefits of x86-64 instruction set..while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers.

https://en.wikipedia.org/wiki/X32_ABI

---

> It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

https://www-cs-faculty.stanford.edu/~knuth/news08.html
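Knuth's arithmetic is easy to check; a quick Python sketch of the struct-size effect, treating a node holding two pointer-sized fields:

```python
import struct

# A node with two pointer fields doubles in size going from 32-bit to
# 64-bit pointers, halving effective cache utilization for those fields.
node32 = struct.calcsize("<II")   # two 4-byte "pointers"
node64 = struct.calcsize("<QQ")   # two 8-byte pointers
print(node32, node64)  # 8 16
```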


Sorry Knuth, but 64-bit pointers are good for security (PAC and guard regions), performance (tagged pointers), and it saves memory to not have 32 and 64 bit versions of the system libraries.

It can be good to store indexes instead of pointers if you want to shrink a field though.


Is it statically typed?


I have yet to find any nice easy "normal" (like Typescript or Dart) statically typed languages that can be easily embedded. In the Rust space there's Gluon which is statically typed but has a really weird syntax and is functional, and there's Rhai which is really quite nice but dynamically typed.

I guess static type checking is probably more work on its own than writing an entire dynamically typed language.


I’m currently working on Tiny [0], a small statically typed programming language which is even more minimal than Wren.

[0] https://github.com/goodpaul6/Tiny


Looks interesting. How is the type just `array` in that example though? Doesn't it have to be an array of something?


Interestingly, given the minimal core of the language, arrays are not built in (this is something I'm looking to change, of course).

As such, the "array" type is just an opaque type exposed via the C binding API. Given that there are no generics at the moment, this array API works with "any" values (similar to "any" in e.g. TypeScript).


Very interesting!

BTW: I think I found a typo: "MATCH2("+=", PERCENTEQUAL);" — should be "%=" ??


Thanks! Just fixed this. Seems the "%=" operator is less useful than I thought.


In the lua realm, there is teal. It is very much like a typescript for lua

https://github.com/teal-language/tl


There's mun [1] which is statically typed and AOT compiled.

1: https://github.com/mun-lang/mun


That has no mention of embedding in its description and has LLVM as a dependency so I assume it isn't easy. Looks like a decent language though.


Have you checked out AngelScript?

https://www.angelcode.com/angelscript/


Oh yeah I always forget about this. I had seen it but it feels a bit archaic. Also it completely fails the "examples on the home page" litmus test.


What about just embedding a WASM VM? Then languages like AssemblyScript, C/C++, Rust, Zig, and Go can target it.




