Planet Lisp

Christophe Rhodes: sbcl method tracing

· 4 days ago

Since approximately forever, sbcl has advertised the possibility of tracing individual methods of a generic function by passing :methods t as an argument to trace. Until recently, tracing methods was only supported using the :encapsulate nil style of tracing, modifying the compiled code for function objects directly.

For a variety of reasons, the alternative :encapsulate t implementation of tracing, effectively wrapping the function with some code to run around it, is more robust. One problem with :encapsulate nil tracing is that if the object being traced is a closure, the modification of the function's code will affect all of the closures, not just any single one - closures are distinct objects with distinct closed-over environments, but they share the same executable code, so modifying one of them modifies all of them. However, the implementation of method tracing I wrote in 2005 - essentially, finding and tracing the method functions and the method fast-functions (on which more later) - was fundamentally incompatible with encapsulation; the method functions are essentially never called by name by CLOS, but by more esoteric means.

What are those esoteric means, I hear you ask?! I'm glad I can hear you. The Metaobject Protocol defines a method calling convention, under which a method function receives two arguments: firstly, the entire argument list that the method body expects to handle; and secondly, the sorted list of applicable next methods, such that the first element is the method which should be invoked if the method uses call-next-method. So a method function conforming to this protocol has to:

  1. destructure its first argument to bind the method parameters to the arguments given;
  2. if call-next-method is used, reconstruct an argument list (in general, because the arguments to the next method need not be the same as the arguments to the existing method) before calling the next method's method-function with the reconstructed argument list and the rest of the next methods.
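To make this concrete, here is a minimal hand-written sketch of a function obeying the convention (the two-parameter destructuring and the body are invented for illustration):

;; a sketch only: a conforming method function for a two-argument method
(lambda (args next-methods)
  ;; 1. destructure the argument list to bind the method parameters
  (destructuring-bind (x y) args
    (if next-methods
        ;; 2. reconstruct an argument list, then call the next method's
        ;; method-function with it and the remaining next methods
        (funcall (sb-mop:method-function (first next-methods))
                 (list x (1+ y))
                 (rest next-methods))
        (+ x y))))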

But! For a given set of actual arguments, for that call, the set of applicable methods is known; the precedence order is known; and, with a bit of bookkeeping in the implementation of defmethod, whether any individual method actually calls call-next-method is known. So it is possible, at the point of calling a generic-function with a set of arguments, to know not only the first applicable method, but in fact all the applicable methods, their ordering, and the combination of those methods that will actually get called (which is determined by whether methods invoke call-next-method and also by the generic function's method combination).

Therefore, a sophisticated (and by "sophisticated" here I mean "written by the wizards at Xerox PARC") implementation of CLOS can compile an effective method for a given call, resolve all the next-method calls, perform some extra optimizations on slot-value and slot accessors, improve the calling convention (we no longer need the list of next methods, but only a single next effective-method, so we can spread the argument list once more), and cache the resulting function for future use. So the one-time cost for each set of applicable methods generates an optimized effective method, making use of fast-method-functions with the improved calling convention.

Here's the trick, then: this effective method is compiled into a chain of method-call and fast-method-call objects, which call their embedded functions. This, then, is ripe for encapsulation; to allow method tracing, all we need to do is arrange at compute-effective-method time that those embedded functions are wrapped in code that performs the tracing, and that any attempt to untrace the generic function (or to modify the tracing parameters) reinitializes the generic function instance, which clears all the effective method caches. And then Hey Bob, Your Uncle's Presto! and everything works.
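As an illustration of the general idea (a hand-rolled sketch, not SBCL's actual tracing machinery), encapsulating a function for tracing amounts to something like:

(defun wrap-with-tracing (fn name)
  ;; return a closure which reports entry to and exit from FN
  (lambda (&rest args)
    (format *trace-output* "~&calling ~S with ~S~%" name args)
    (let ((values (multiple-value-list (apply fn args))))
      (format *trace-output* "~&~S returned ~{~S~^, ~}~%" name values)
      (values-list values))))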

(defgeneric foo (x)
  (:method (x) 3))
(defmethod foo :around ((x fixnum))
  (1+ (call-next-method)))
(defmethod foo ((x integer))
  (* 2 (call-next-method)))
(defmethod foo ((x float))
  (* 3 (call-next-method)))
(defmethod foo :before ((x single-float))
  'single)
(defmethod foo :after ((x double-float))
  'double)

Here's a generic function foo with moderately complex methods. How can we work out what is going on? Call the method tracer!

CL-USER> (foo 2.0d0)
  0: (FOO 2.0d0)
    1: ((SB-PCL::COMBINED-METHOD FOO) 2.0d0)
      2: ((METHOD FOO (FLOAT)) 2.0d0)
        3: ((METHOD FOO (T)) 2.0d0)
        3: (METHOD FOO (T)) returned 3
      2: (METHOD FOO (FLOAT)) returned 9
      2: ((METHOD FOO :AFTER (DOUBLE-FLOAT)) 2.0d0)
      2: (METHOD FOO :AFTER (DOUBLE-FLOAT)) returned DOUBLE
    1: (SB-PCL::COMBINED-METHOD FOO) returned 9
  0: FOO returned 9
9

This mostly works. It doesn't quite handle all cases, specifically when the CLOS user adds a method and implements call-next-method for themselves:

(add-method #'foo
            (make-instance 'standard-method
             :qualifiers '()
             :specializers (list (find-class 'fixnum))
             :lambda-list '(x)
             :function (lambda (args nms)
                         (+ 2 (funcall (sb-mop:method-function (first nms))
                                       args (rest nms))))))
CL-USER> (foo 3)
  0: (FOO 3)
    1: ((METHOD FOO :AROUND (FIXNUM)) 3)
      2: ((METHOD FOO (FIXNUM)) 3)
      2: (METHOD FOO (FIXNUM)) returned 8
    1: (METHOD FOO :AROUND (FIXNUM)) returned 9
  0: FOO returned 9
9

In this trace, we have lost the method trace from the direct call to the method-function, and calls that that function makes; this is the cost of performing the trace in the effective method, though a mitigating factor is that we have visibility of method combination (through the (sb-pcl::combined-method foo) line in the trace above). It would probably be possible to do the encapsulation in the method object itself, by modifying the function and the fast-function, but this requires rather more book-keeping and (at least theoretically) breaks the object identity: we do not have licence to modify the function stored in a method object. So, for now, sbcl has this imperfect solution for users to try (expected to be in sbcl-1.4.9, probably released towards the end of June).

(I can't really believe it's taken me twelve years to do this. Other implementations have had this working for years. Sorry!)

Quicklisp news: No May 2018 Quicklisp dist update

· 19 days ago
The computer on which I make Quicklisp builds stopped working a little while ago, and I haven't had time to dive in and work on it. As soon as it's fixed, I'll prepare and release a new dist. Sorry about the inconvenience!

Christophe Rhodes: sbcl method-combination fixes

· 25 days ago

At the 2018 European Lisp Symposium, the most obviously actionable feedback for SBCL from a presentation was from Didier's remorseless deconstruction of SBCL's support for method combinations (along with the lack of explicitness about behavioural details in the ANSI CL specification and the Art of the Metaobject Protocol). I don't think that Didier meant to imply that SBCL was particularly bad at method combinations, compared with other available implementations - merely that SBCL was a convenient target. And, to be fair, there was a bug report from a discussion with Bruno Haible back in SBCL's history - May/June 2004, according to my search - which had languished largely unfixed for fourteen years.

I said that I found the Symposium energising. And what better use to put that energy than addressing user feedback? So, I spent a bit of time earlier this month thinking, fixing and attempting to work out what behaviours might actually be useful. To be clear, SBCL's support for define-method-combination was (probably) standards-compliant in the usual case, but if you follow the links from above, or listen to Didier's talk, you will be aware that that's not saying all that much, in that almost nothing is specified about behaviours under redefinition of method combinations.

So, to start with, I solved the cache invalidation (one of the hardest problems in Computer Science), making sure that discriminating functions and effective methods are reset and invalidated for all affected generic functions. This was slightly complicated by the strategy that SBCL has of distinguishing short and long method-combinations with distinct classes (and distinct implementation strategies for compute-effective-method); but this just required being methodical and careful. Famous last words: I think that all method-combination behaviour in SBCL is now coherent and should meet user expectations.

More interesting, I think, was coming up with test cases for desired behaviours. Method combinations are not, I think, widely used in practice; whether that is because of lack of support, lack of understanding or lack of need of what they provide, I don't know. (In fact in conversations at ELS we discussed another possibility, which is that everyone is more comfortable customising compute-effective-method instead - both that and define-method-combination provide ways for inserting arbitrary code for the effect of a generic function call with particular arguments.) But what this means is that there isn't, as far as I know at least, a large corpus of interesting method combinations to play with.

One interesting one which came up: Bike on #lisp designed an implementation of finite state machines using method combinations, which I adapted to add to SBCL's test suite. My version looks like:

(define-method-combination fsm (default-start)
    ((primary *))
    (:arguments &key start)
  `(let ((state (or ,start ',default-start)))
     (restart-bind
         (,@(mapcar (lambda (m) `(,(first (method-qualifiers m))
                                  (lambda ()
                                    (setq state (call-method ,m))
                                    (if (and (typep state '(and symbol (not null)))
                                             (find-restart state))
                                        (invoke-restart state)
                                        state))))
                    primary))
       (invoke-restart state))))

and there will be more on this use of restart-bind in a later post, I hope. Staying on the topic of method combinations, how might one use this fsm method combination? A simple example might be to recognize strings with an even number of #\a characters:

;;; first, define something to help with all string parsing
(defclass info ()
  ((string :initarg :string)
   (index :initform 0)))
;;; then the state machine itself
(defgeneric even-as (info &key &allow-other-keys)
  (:method-combination fsm :yes))
(defmethod even-as :yes (info &key)
  (with-slots ((s string) (i index)) info
    (cond ((= i (length s)) t)
          ((char= (char s i) #\a) (incf i) :no)
          (t (incf i) :yes))))
(defmethod even-as :no (info &key)
  (with-slots ((s string) (i index)) info
    (cond ((= i (length s)) nil)
          ((char= (char s i) #\a) (incf i) :yes)
          (t (incf i) :no))))
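Assuming the definitions above, a quick check at the REPL behaves as one would hope:

CL-USER> (even-as (make-instance 'info :string "abab"))
T
CL-USER> (even-as (make-instance 'info :string "a"))
NIL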

(Exercise for the reader: adapt this to implement a Turing Machine)

Another example of (I think) an interesting method combination was one which I came up with in the context of generalized specializers, for an ELS a while ago: the HTTP Request method-combination to be used with HTTP Accept specializers. I'm interested in more! A github search found some examples before I ran out of patience; do you have any examples?

And I have one further question. The method combination takes arguments at generic-function definition time (the :yes in (:method-combination fsm :yes)). Normally, arguments to things are evaluated at the appropriate time. At the moment, SBCL (and indeed all other implementations I tested, but that's not strong evidence given the shared heritage) does not evaluate the arguments to :method-combination - treating it more like a macro call than a function call. I'm not sure that is the most helpful behaviour, but I'm struggling to come up with an example where the other is definitely better. Maybe something like

(let ((lock (make-lock)))
  (defgeneric foo (x)
    (:method-combination locked lock)
    (:method (x) ...)))

which would allow automatic locking around the effective method of FOO through the method combination? I need to think some more here.
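For what it's worth, one way to get something like that effect today, given the non-evaluation, is to make the argument the name of a special variable, so that the lookup of the actual lock happens at call time. A rough sketch (bordeaux-threads assumed for the locking; all names invented):

(defvar *foo-lock* (bt:make-lock))

(define-method-combination locked (lock-name)
  ((primary () :required t))
  ;; LOCK-NAME arrives unevaluated; since it names a special variable,
  ;; the reference in the effective method is evaluated at call time
  `(bt:with-lock-held (,lock-name)
     (call-method ,(first primary) ,(rest primary))))

(defgeneric locked-foo (x)
  (:method-combination locked *foo-lock*)
  (:method (x) x))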

In any case: the method-combination fixes are in the current SBCL master branch, shortly to be released as sbcl-1.4.8. And there is still time (though not very much!) to apply for the many jobs advertised at Goldsmiths Computing - what better things to do on a Bank Holiday weekend?

Marco Antoniotti: Some updates: bug fixing and CLAD.

· 33 days ago
Hello there,

it has been a very long time since I posted here, but most recently, thanks to a couple of pesky bug reports on HEΛP by Mirko Vukovic, and because I had a couple of days relatively free, I was able to go back to do some programming, fix (some of) the bugs and post here.

Here is the story.  There were two bugs which I had to deal with (there are more, of course).
  1. A bug triggered by CCL and its implementation of the Common Lisp Reader algorithm.
  2. A buglet due to missing supporting data (.css files in this case) in the deployment of the HEΛP generated documentation.
The first bug was quite difficult to track down and it boiled down to CCL bailing out on READ in an unexpected way (that is, with respect to other implementations).  As an aside, this is a problem of the standard not having a predefined condition for "error caused by the reader because it does not find a package"; LW has CONDITIONS:PACKAGE-NOT-FOUND-READER, but that is not standard, and some implementations just signal READER-ERROR or PACKAGE-ERROR.  The error was easy to "fix" once diagnosed: just don't process files that you know will be problematic, and HEΛP can already "exclude" such files.

The second bug was easier to diagnose, but the fix was more complicated (especially due to the NIH syndrome I suffer from).  The problem is that ASDF moves the compiled code around, but not auxiliary data, like in my case, .css files.  I could have followed what ASDF does, but I decided to go another way and came up with a small library I called Common Lisp Application Data (CLAD, because you need to "dress" your code).

CLAD

By now, at least on Windows 10 and Mac OS X (and Linux), there is a notion of an Application Data Folder. The user version of this folder (as opposed to the system one) is ~/Library/ on Mac OS X and %USERPROFILE%\AppData\Roaming\ on Windows 10 (this is the "roaming" profile in W10 parlance).  For Linux there are several semi-standards, one of them is the XDG base directory specification; in this case a common solution is to use the ~/.config/ folder.

CLAD assumes these "fixed" locations to create per-application or per-library subfolders of a "Common Lisp" one.  That is, CLAD ensures the presence of the following folders in your account.
  • Mac Os X: ~/Library/Common Lisp/
  • Windows 10: %USERPROFILE%\AppData\Roaming\Common Lisp\
  • Linux/Unix: ~/.config/common-lisp/ (in this case, I am following ASDF lead)
The library exports three simple functions,
  1. user-cl-data-folder, which returns the pathnames of the folders above.
  2. ensure-app-or-library-data-folder, which, as the name implies, ensures that a subfolder exists in the proper location.
  3. app-or-library-data-folder, which returns the pathname associated to a library or app.
A library or app can now set itself up by doing something like

(defparameter *helambdap-data-folder*
  (clad:ensure-app-or-library-data-folder "HELambdaP")
  "The user HELambdaP data folder.")

On Mac OS X, this results in the folder ~/Library/Common Lisp/HELambdaP; a library or an application can now rely on a clear space in which to store "common" data files.  For HEΛP it solved the problem of where to find the .css files in a reliable place.

Trivial? Yes.
NIH syndrome? Of course.
Complete? No.
Useful? You be the judge of that.


(cheers)

Lispjobs: Lisp Developer, 3E, Brussels, Belgium

· 36 days ago

See: http://3eeu.talentfinder.be/en/vacature/30101/lisp-developer

You join a team of developers, scientists, engineers and business developers that develop, operate and commercialize SynaptiQ worldwide.

You work in a Linux-based Java, Clojure and Common Lisp environment. Your focus is on the development, maintenance, design and unit testing of SynaptiQ's real-time aggregation and alerting engine that processes time-series and events. This data engine is Common Lisp based.

The objective is to own the entire lifecycle of the platform, that is, from the architecture and development of new features to the deployment and operation of the platform in a production environment. The position is open to candidates with no knowledge of LISP if they have a good affinity for, and experience with, functional languages.

Christophe Rhodes: algorithms and data structures term2

· 42 days ago

I presented some of the work on teaching algorithms and data structures at the 2018 European Lisp Symposium.

Given that I wanted to go to the symposium (and I'm glad I did!), the most economical method for going was if I presented research work - because then there was a reasonable chance that my employer would fund the expenses (spoiler: they did; thank you!). It might perhaps be surprising to hear that they don't usually directly fund attending events where one is not presenting; on the other hand, it's perhaps reasonable on the basis that part of an academic's job as a scholar and researcher is to be creating and disseminating new knowledge, and of course universities, like any organizations, need to prioritise spending money on things which bring value or further the organization's mission.

In any case, I found that I wanted to write about the teaching work that I have been doing, and in particular I chose to write about a small, Lisp-related aspect. Specifically, it is now fairly normal in technical subjects to perform a lot of automated testing of students; it relieves staff of the burden of assessing things which can be mechanically assessed, and delivers feedback to individual students automatically; this frees up staff time to perform targeted interventions, give better feedback on more qualitative aspects of the curriculum, or work fewer weekends of the year. A large part of my teaching work for the last 18 months has been developing material for these automated tests, and working on the infrastructure underlying them, for my and colleagues' teaching.

So, the more that we can test automatically and meaningfully, the more time we have to spend on other things. The main novelty here, and the lisp-related hook for the paper I submitted to ELS, was being able to give meaningful feedback on numerical answer questions which probed whether students were developing a good mental model of the meaning of pseudocode. That's a bit vague; let's be specific and consider the break and continue keywords:

x ← 0
for 0 ≤ i < 9
  x ← x + i
  if x > 17
    continue
  end if
  x ← x + 1
end for
return x

The above pseudocode is typical of what a student might see; the question would be "what does the above block of pseudocode return?", which is mildly arithmetically challenging, particularly under time pressure, but the conceptual aspect that was being tested here was whether the student understood the effect of continue. Therefore, it is important to give the student specific feedback; the more specific, the better. So if a student answered 20 to this question (as if the continue acted as a break), they would receive a specific feedback message reminding them about the difference between the two operators; if they answered 45, they received a message reminding them that continue has a particular meaning in loops; and any other answers received generic feedback.

Having just one of these questions does no good, though. Students will go to almost any lengths to avoid learning things, and it is easy to communicate answers to multiple-choice and short-answer questions among a cohort. So, I needed hundreds of these questions: at least one per student, but in fact by design the students could take these multiple-choice quizzes multiple times, as they are primarily an aid for the students themselves, to help them discover what they know.

Now of course I could treat the above pseudocode fragment as a template, parameterise it (initial value, loop bounds, increment) and compute the values needing the specific feedback in terms of the values of the parameters. But this generalizes badly: what happens when I decide that I want to vary the operators (say to introduce multiplication) or modify the structure somewhat (e.g. by swapping the two increments before and after the continue)? The parametrization gets more and more complicated, the chances of (my) error increase, and perhaps most importantly it's not any fun.

Instead, what did I do? With some sense of grim inevitability, I evolved (or maybe accreted) an interpreter (in emacs lisp) for a sexp-based representation of this pseudocode. At the start of the year, it's pretty simple; towards the end it has developed into an almost reasonable mini-language. Writing the interpreter is straightforward, though the way it evolved into one gigantic case statement for supported operators rather than having reasonable semantics is a bit of a shame; as a bonus, implementing a pretty-printer for the sexp-based pseudocode, with correct indentation and keyword highlighting, is straightforward. Then armed with the pseudocode I will ask the students to interpret, I can mutate it in ways that I anticipate students might think like (replacing continue with break or progn) and interpret that form to see which wrong answer should generate what feedback.
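The real interpreter is in Emacs Lisp and accreted features over the year; purely to show the shape of the thing, here is a minimal Common Lisp sketch, with an invented sexp syntax and operator set:

;; a sketch only: one case per operator, with catch/throw giving
;; break and continue their non-local control flow
(defun pseudo-eval (form env)
  (etypecase form
    (number form)
    (symbol (cdr (assoc form env)))
    (cons
     (ecase (first form)
       (set (setf (cdr (assoc (second form) env))
                  (pseudo-eval (third form) env)))
       (+ (+ (pseudo-eval (second form) env)
             (pseudo-eval (third form) env)))
       (> (> (pseudo-eval (second form) env)
             (pseudo-eval (third form) env)))
       (if (when (pseudo-eval (second form) env)
             (pseudo-eval (third form) env)))
       (break (throw 'break nil))
       (continue (throw 'continue nil))
       (for (destructuring-bind (var lo hi &rest body) (rest form)
              (catch 'break
                (loop for i from lo below hi
                      do (let ((env (acons var i env)))
                           (catch 'continue
                             (dolist (b body) (pseudo-eval b env))))))))))))

The pseudocode fragment above, in sexp form, then evaluates to the expected answer:

(let ((env (list (cons 'x 0))))
  (pseudo-eval '(for i 0 9
                 (set x (+ x i))
                 (if (> x 17) (continue))
                 (set x (+ x 1)))
               env)
  (cdr (assoc 'x env)))  ; => 41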

Anyway, that was the hook. There's some evidence in the paper that the general approach of repeated micro-assessment, and also the consideration of likely student mistakes and giving specific feedback, actually works. And now that the (provisional) results are in, how does this term compare with last term? We can look at the relationship between this term's marks and last term's. What should we be looking for? Generally, I would expect marks in the second term's coursework to be broadly similar to the marks in the first term - all else being equal, students who put in a lot of effort and are confident with the material in term 1 are likely to have an easier time integrating the slightly more advanced material in term 2. That's not a deterministic rule, though; some students will have been given a wake-up call by the term 1 marks, and equally some students might decide to coast.

plot of term 2 marks against term 1: a = 0.82, R² = 0.67

I've asked R to draw the regression line in the above picture; a straight line fit seems reasonable based on the plot. What are the statistics of that line?

R> summary(lm(Term2~Term1, data=d))

Call:
lm(formula = Term2 ~ Term1, data = d)

Residuals:
        Min      1Q  Median      3Q     Max 
    -41.752  -6.032   1.138   6.107  31.155 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.18414    4.09773   0.777    0.439
Term1        0.82056    0.05485  14.961   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 10.46 on 107 degrees of freedom
  (32 observations deleted due to missingness)
Multiple R-squared:  0.6766,    Adjusted R-squared:  0.6736 
F-statistic: 223.8 on 1 and 107 DF,  p-value: < 2.2e-16

Looking at the summary above, we have a strong positive relationship between term 1 and term 2 marks. The intercept is approximately zero (if you got no marks in term 1, you should expect no marks in term 2), and the slope is less than one: on average, each mark a student got in term 1 tended to convert to 0.8 marks in term 2 - this is plausibly explained by the material being slightly harder in term 2, and by the fact that some of the assessments were more explicitly designed to allow finer discrimination at the top end - marks in the 90s. (A note for international readers: in the UK system, the pass mark is 40%, excellent work is typically awarded a mark in the 70% range - marks of 90% should be reserved for exceptional work). The average case is, however, only that: there was significant variation from that average line, and indeed (looking at the quartiles) over 50% of the cohort was more than half a degree class (5 percentage points) away from their term 2 mark as "predicted" from their mark for term 1.

All of this seems reasonable, and it was a privilege to work with this cohort of students, and to present the sum of their interactions on this course to the audience I had. I got the largest round of applause, I think, for revealing that as part of the peer-assessment I had required that students run each others' code. I also had to present some of the context for the work; not only because this was an international gathering, with people in various university systems and from industry, but also because of the large-scale disruption caused by industrial action over the Universities Superannuation Scheme (the collective, defined benefit pension fund for academics at about 68 Universities and ~300 other bodies associated with Higher Education). Perhaps most gratifyingly, students were able to continue learning despite being deprived of their tuition for three consecutive weeks; judging by their performance on the various assessments so far, they have managed to do so.

And now? The students will sit an exam, after which I and colleagues will look in detail at those results and the relationship with the students' coursework marks (as I did last year). I will continue developing this material (my board for this module currently lists 33 todo items), and adapt it for next year and for new cohorts. And maybe you will join me? The Computing department at Goldsmiths is hiring lecturers and senior lecturers to come and participate in research, scholarship and teaching in computing: a lecturer in creative computing, a lecturer in computer games, a lecturer in data science, a lecturer in physical and creative computing, a lecturer in computer science and a senior lecturer in computer science. Anyone reading this is welcome to contact me to find out more!

Victor Anyakin: "A new way of blogging about Common Lisp" by Yehonathan Sharvit

· 44 days ago

Seen this post mentioned several times on Twitter, but not on Planet Lisp yet. So, here it is:

A new way of blogging about Common Lisp by Yehonathan Sharvit (@viebel).

Christophe Rhodes: els2018 reflections

· 47 days ago

A few weeks ago, I attended the 2018 European Lisp Symposium.

If you were at the 2017 iteration, you might (as I do) have been left with an abiding image of greyness. That was not at all to fault the location (Brussels) or the organization (co-located with <Programming>) of the symposium; however, the weather was unkind, and that didn't help with my own personal low-energy mood at the time.

This year, the event was organized by Ravenpack in Marbella. And the first thing that I noticed on landing was the warmth. (Perhaps the only "perk" of low-cost airlines is that it is most likely that one will disembark on to terra firma rather than a passenger boarding bridge.) And after quite a miserable winter in the UK, the warmth and the sunshine at the bus stop, while waiting for the bus from Málaga airport to Marbella, was very welcome indeed. The sense of community was strong too; while waiting for the bus, I hopped on to #lisp IRC and Nic Hafner volunteered to meet me at the Marbella bus station and walk with me to my hotel; we ended up going for drinks at a beachside bar that evening with Dimitri Fontaine, Christian Schafmeister, Mikhail Raskin and others - and while we were sitting there, enjoying the setting, I saw other faces I recognised walking past along the beach promenade.

The setting for the conference itself, the Centro Cultural Cortijo de Miraflores, was charming. We had the use of a room, just about big enough for the 90ish delegates; the centre is the seat of the Marbella Historical Archives, and our room was next to the olive oil exhibition.

2018 European Lisp Symposium opening

We also had an inside courtyard for the breaks; sun, shade, water and coffee were available in equal measure, and there was space for good conversations - I spoke with many people, and it was great to catch up and reminisce with old friends, and to discuss the finer points of language implementation and release management with new ones. (I continue to maintain that the SBCL time-boxed monthly release cadence of master, initiated by Bill Newman way back in the SBCL 0.7 days in 2002 [!], has two distinct advantages, for our situation, compared with other possible choices, and I said so more than once over coffee.)

The formal programme, curated by Dave Cooper, was great too. Zach's written about his highlights; I enjoyed hearing Jim Newton's latest on being smarter about typecase, and Robert Smith's presentation about quantum computing, including a walkthrough of a quantum computing simulator. As well as quantum computing, application areas mentioned in talks this year included computational chemistry, data parallel processing, pedagogy, fluid dynamics, architecture, sentiment analysis and enterprise resource planning - and there were some implementation talks in there too, not least from R. "it's part of the brand" Matthew Emerson, who gave a paean to Clozure Common Lisp. Highly non-scientific SBCL user feedback from ELS presenters: most of the speakers with CL-related talks or applications at least mentioned SBCL; most of those mentions by speaker were complimentary, but it's possible that most by raw count referred to bugs - solely due to Didier Verna's presentation about method combinations (more on this in a future blog post).

Overall, I found this year's event energising. I couldn't be there any earlier than the Sunday evening, nor could I stay beyond Wednesday morning, so I missed at least the organized social event, but the two days I had were full and stress-free (even when chairing sessions and for my own talk). The local organization was excellent; Andrew Lawson, Nick Levine and our hosts at the foundation did us proud; we had a great conference dinner, and the general location was marvellous.

Next year's event will once again be co-located with <Programming>, this time in Genova (sunny!) on 1st-2nd April 2019. Put it in your calendar now!

Didier Verna: Lisp, Jazz, Aikido, 10 years later

· 49 days ago

10 years ago, I published a short blog entry entitled "Lisp, Jazz, Aikido", barely scratching the surface of what I found to be commonalities between the 3 disciplines. At the time, I had the intuition that those ideas were the tip of a potentially big iceberg, and I ended the blog with the following sentence: "I'd like to write a proper essay about these things when I find the time... someday."

Well, 10 years later, I did. The essay, which is 50 pages long, has been published in the Art, Science, and Engineering of Programming Journal, and actually received the Reviewers' Choice Award 2018. I'm not the bragging type, far from it, but I had to mention this because this essay is so personal, and I invested so much in its preparation (more than 300 hours) that I am as deeply touched by the award as I would have been hurt, had it been negatively received...

The live presentation has unfortunately not been recorded, but I took the time to make a screencast afterwards, which is now available on YouTube. Just like the essay, this presentation is not in the typical setting that you'd expect at a scientific conference...

If you've got an artistic fiber, if you're sensitive to the aesthetic dimension in what you do, you may enjoy this work...

Zach Beane: Thoughts on ELS 2018

· 50 days ago

Matt Emerson opened the conference with a keynote talk on Clozure CL. He also touched on using and celebrating CL. It provided a jolt of enthusiasm and positivity to kick off a fun conference. And it made me want to use Clozure CL more often and learn how to effectively hack on and with it.

Nicolas Hafner's talk on shaders was interesting, but partway through the talk he revealed that the slideshow system itself was an interactive Common Lisp program. Along with standard slideshow behavior, it displayed and animated the classic teapot, with interactive recompilation and redisplay of the shaders.

Ethan Schwartz's talk on Google's tweaks to SBCL's GC was a reminder that you don't have to settle for the system-provided customization knobs. When source is available and you're willing to customize very specifically for your application's workload, you can get nice improvements.

(Also his passing mention of Docker inspired me to work on a Docker image with all the Quicklisp-required foreign libraries baked into it. I think it will be helpful to me for easier testing, and maybe distributed builds. It should be helpful to others for testing everything in Quicklisp without the pain of gathering all the build requirements for its many projects.)

You should check out Douglas Katzman's lightning talk, "Self-modifying code (for fun and profit)". He's added a neat thing in SBCL where the system can be disassembled completely and reassembled into a new binary with various bits of optimization or instrumentation. 

There were many other great talks and discussions!

It's rejuvenating to hear about all the cool things people are working on. There's stuff people are making that you might make use of in your own projects, or inspiration to hack your own things to share with others.

Also: great weather, great city, fun excursion to Ronda, seeing old friends from past conferences, meeting new friends, putting online names to real-life faces, hearing directly from people who use Quicklisp. And Dimitri Fontaine told me about tic80, which my kid and I used to make a retro game in just a few days.

Next ELS conference is in Genova, Italy on April 1, 2019. Start planning today, and I'll see you there!

Quicklisp news: Quicklisp dist update for April, 2018 now available

· 50 days ago
New projects:
  • bst — Binary search tree — GPL-3
  • cl-colors2 — Simple color library for Common Lisp — Boost Software License - Version 1.0
  • cl-env — Easily parse .env files. That's it! — MIT
  • cl-gopher — Gopher protocol library — BSD 2-Clause
  • clunit2 — CLUnit is a Common Lisp unit testing framework. — BSD
  • dml — Diagram Make Language for common lisp — MIT License
  • external-symbol-not-found — Portability library for detecting reader ~ errors coming from reading non-existing or non-external symbols in packages — Unlicense
  • golden-utils — Auxiliary utilities (AU). — MIT
  • linedit — Readline-style library. — MIT
  • lisp-interface-library — Long name alias for lil — MIT
  • ppath — A Common Lisp path handling library based on Python's os.path module — BSD
  • practical-cl — All code from Practical Common Lisp. — BSD
  • quux-hunchentoot — Thread pooling for hunchentoot — MIT
  • s-dot2 — Render Graphviz graphs from within Lisp — BSD-style
Updated projects: 3d-matrices, 3d-vectors, agnostic-lizard, architecture.builder-protocol, cepl, cl-ana, cl-bootstrap, cl-conllu, cl-cpus, cl-dbi, cl-factoring, cl-fad, cl-feedparser, cl-fixtures, cl-flow, cl-fluent-logger, cl-fond, cl-gbm, cl-glfw3, cl-graylog, cl-mlep, cl-notebook, cl-pack, cl-python, cl-random-forest, cl-readline, cl-sdl2, cl-str, cl-yesql, claw, clem, closer-mop, clweb, clx, croatoan, defenum, djula, documentation-utils-extensions, dufy, dyna, esrap, fare-memoization, femlisp, gamebox-dgen, gamebox-frame-manager, gamebox-math, glsl-spec, glsl-toolkit, house, hu.dwim.delico, inkwell, ironclad, jonathan, jose, lack, lisp-chat, listopia, local-time-duration, maiden, matlisp, mcclim, mito, nibbles, nineveh, oclcl, overlord, oxenfurt, parser.common-rules, parsley, perlre, pngload, postmodern, qlot, rover, rtg-math, serapeum, shadow, staple, stumpwm, universal-config, unix-opts, utm, varjo, xmls.

Removed projects: cl-openid, cl-sendmail, xmls-tools.

The removed projects depend on XMLS, which was recently updated with a slightly different API, and they were never updated to match.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

TurtleWare: McCLIM demo - Gadgets

· 53 days ago

My second video where I explain CLIM gadget abstraction. This is a live coding session which takes around 1h30m. Enjoy!

If you like it - let me know. If you hate it - let me know. :-)

Lispjobs: Senior Lisp Developer, RavenPack, Marbella, Spain

· 64 days ago

See: https://www.ravenpack.com/careers/common-lisp-developer

As a Common Lisp Developer, you will be part of the Analytics team which is in charge of the development and maintenance of applications that, among other things, extract data from incoming news and deliver user and machine-friendly analytics to customers.

You will be reporting directly to the Analytics Manager and will work with an international team of developers skilled in Common Lisp, Java, Python and SQL.

The ability to communicate effectively in English both in writing and verbally is a must. Knowledge of Spanish is not a business requirement. European Union legal working status is required. Competitive compensation and a fun working environment. Relocation assistance is available, but remote working is not a possibility for this position.

Lispjobs: Junior Lisp Developer, RavenPack, Marbella, Spain

· 64 days ago

See: https://www.ravenpack.com/careers/junior-common-lisp-developer

At RavenPack we are searching for a Junior Common Lisp Developer to join RavenPack's Development Team.

As a Junior Common Lisp Developer, you will be part of the Analytics team which is in charge of the development and maintenance of applications that, among other things, extract data from incoming news and deliver machine-friendly analytics to customers.

You will be reporting directly to the Analytics Manager and will work with an international team of developers skilled in Common Lisp, Java, Python and SQL.

The ability to communicate effectively in English both in writing and verbally is a must. Knowledge of Spanish is not a business requirement. European Union legal working status is required. Competitive compensation and a fun working environment. Relocation assistance available, but working remotely is not possible for this position.

Luís Oliveira: Reddit 1.0

· 79 days ago
As many Lisp programmers may remember, Reddit was initially written in Common Lisp. Then, in late 2005, it was rewritten in Python. Much discussion ensued. Steve Huffman (aka spez, current CEO of Reddit) wrote about why they switched to Python. The late Aaron Swartz also wrote Rewriting Reddit, focusing on the Python side of things.

Last week, that original, Common Lisp version of Reddit was published on GitHub: reddit-archive/reddit1.0. There's no documentation, but we know from the articles above that it ran on CMUCL. reddit.asd is perhaps a good entry point, where we can see that it used TBNL, CL-WHO and CLSQL, a common toolset at the time.

It's missing various bits of static HTML, CSS and JS, so I don't think you can actually run this website without a fair bit of scraping and stitching from the Wayback Machine. The code is quite readable and clean. It's not terribly interesting, since it's a fairly simple website. Still, it's a website that grew to become the 6th most visited worldwide (with 542 million monthly visitors) and now has 230 employees, says Wikipedia, so it's nice to see what it looked like at the very beginning.

Quicklisp news: Quicklisp dist update for March 2018 now available

· 84 days ago
New projects:
  • algebraic-data-library — A library of common algebraic data types. — BSD 3-clause
  • bodge-nanovg — Wrapper over nanovg library for cl-bodge system — MIT
  • can — A role-based access right control library — BSD 2-Clause
  • cl-darksky — Get weather via Dark Sky — BSD 2-clause
  • cl-gimei — random japanese name and address generator — MIT
  • cl-libsvm-format — A fast LibSVM data format reader for Common Lisp — MIT
  • cl-notebook — A notebook-style in-browser editor for Common Lisp — AGPL3
  • cl-octet-streams — In-memory octet streams — GPL-3
  • cl-postgres-plus-uuid — Common Lisp library providing a cl-postgres SQL reader for the PostgreSQL UUID type. — MIT
  • documentation-utils-extensions — Set of extensions for documentation-utils. — MIT
  • fact-base — Simple implementation of fact-base data storage for Common Lisp — AGPL3
  • house — Custom asynchronous HTTP server for the Deal project. — AGPL3
  • inkwell — An API client for the Splatoon 2 Splatnet. — Artistic
  • oxenfurt — A client for the Oxford dictionary API. — Artistic
  • rove — Small testing framework — BSD 3-Clause
Updated projects: array-operations, cells, cepl, cepl.spaces, checkl, chipz, cl+ssl, cl-algebraic-data-type, cl-bplustree, cl-colors, cl-dbi, cl-feedparser, cl-fond, cl-hamcrest, cl-ledger, cl-libuv, cl-marshal, cl-mpi, cl-online-learning, cl-opengl, cl-openstack-client, cl-protobufs, cl-random, cl-random-forest, cl-rmath, cl-scsu, cl-selenium-webdriver, cl-store, cl-toml, cl-unicode, clack, closer-mop, computable-reals, croatoan, dbd-oracle, dexador, easy-audio, eazy-gnuplot, esrap, extended-reals, fare-memoization, femlisp, flexi-streams, folio2, harmony, humbler, lass, latex-table, lispbuilder, listopia, lla, lucerne, maiden, matlisp, mcclim, mito, nineveh, omer-count, opticl, overlord, parachute, parser.common-rules, parser.ini, postmodern, qlot, random-state, rtg-math, scalpl, serapeum, shadow, staple, stumpwm, trivial-gray-streams, trivialib.bdd, utils-kt, varjo.

Removed projects: cl-cudd, lisp-interface-library, plain-odbc, quux-hunchentoot.

The removed projects no longer build.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

Nicolas Hafner: Geometry Clipmaps - Gamedev

· 100 days ago

header
A while back I attempted to implement a technique called Geometry Clipmaps in Trial. This technique is used to stream in terrain data for scenery where the terrain is far too large to be stored in memory all at once, let alone be rendered at full resolution. In this entry I'll go over the idea of the technique and the problems I had implementing it.

First, the technique is described in two papers: Geometry Clipmaps: Terrain Rendering Using Nested Regular Grids by Hoppe and Losasso [1], and Terrain Rendering Using GPU-Based Geometry Clipmaps by Hoppe and Asirvatham [2]. Neither of the papers offers any public source code, or much of any code at all to describe the details of the technique, especially in relation to the update mechanics. This is a problem, because a lot of the details of the papers are very strange indeed.

Before I get into that though, let me summarise the idea of the technique, which is fairly simple: since we cannot draw the full terrain, we need to reduce the detail somehow. Luckily for us, how much detail we can actually see in a rendered scene decreases with distance thanks to perspective skewing. Thus, the idea is that, the farther away the terrain geometry is from the camera, the coarser we render it.

To do this we create a square ring, whose inner sides are half the size of its outer sides. This gives us a power of two scaling, where we can simply render the same ring within itself over and over, scaling it down by two every time. Once we have sufficiently many rings, we simply fill the hole with a remaining square grid. Here's an image to illustrate the idea:

gcm-ring

As you can see, the detail gets exponentially more coarse the farther away from the centre you go. If we look at this from the perspective of a camera that is properly centred, we can see the effect even better:

gcm-perspective

Despite the geometry becoming more coarse, it takes up a similar amount of actual pixels on screen. Now, the geometry I used here is rather simple. In the papers, they use a much more complex layout in order to save on vertices even more aggressively:

gcm-blocks

This, especially the interior trim, makes things a lot more painful to deal with. I'm quite sure the trim is a necessary evil in their scheme, as the ring is no longer a power of two reduction, but they still need the vertices to align properly. I can see no other reason for the trim to exist. Worse than that though, I can absolutely not understand why they specify four variants of the trim, one for each corner, when the largest trim is going to offset the rest of the geometry so much that even if every other ring uses the opposite trim, you cannot correct the shift enough to make things exactly centre again. If someone understands this, please let me know.

Anyway, now that we have these rings established, we need to look at how to actually render geometry with them. The idea here is to use height maps and dynamically shift the vertices' height in the vertex shader according to the brightness of the corresponding pixel in the height map. Since each ring has the same vertex resolution, we can use a fixed resolution texture for each level as well, saving on texture memory.

Let's look at an example for that. In order to generate the textures, we need a base heightmap at a resolution of (n*2^(r-1))^2, where n is the resolution of a clipmap, and r is the number of rings. In this example I used a clipmap resolution of 64 and 4 levels, meaning I needed a base heightmap resolution of 512^2. For the innermost level you simply crop out the center at the clipmap resolution. For the next level you crop out twice as much, and then scale to half, and so on.
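The bookkeeping for this is trivial; here is a small sketch of the sizes involved (function names invented):

;; base heightmap resolution: n*2^(r-1) pixels per side
(defun base-resolution (n r)
  (* n (expt 2 (1- r))))

;; side length of the crop for level LEVEL (then scaled back down to N)
(defun level-crop-size (n level)
  (* n (expt 2 level)))

;; (base-resolution 64 4)  => 512
;; (level-crop-size 64 0)  => 64    ; innermost level, cropped directly
;; (level-crop-size 64 3)  => 512   ; outermost level, the whole map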

gcm-maps

The above are scaled up again using nearest-neighbour to make them better visible to the human eye. Displacing the rings by each corresponding map results in the following image. I've increased the clipmap resolution to 64 to match that of the textures.

gcm-terrain

Looks pretty good, eh? You might notice some issues arising on the edges between clipmap levels though. Namely, some vertices don't match up quite exactly. This is due to smoothing. Since each clipmap level has the same resolution, some of the pixels at the border are going to be too precise. In order to fix this we need to blend vertices as they get closer to the border. To do this we look at the distance of the vertex to the center and, depending on its magnitude, linearly blend between the current level and the next outer one. This means that for each clipmap level we draw, we need to have access to both the current level's heightmap and the next level's.
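In scalar form the blend itself is simple; a sketch of the factor computation (the 0.7 threshold is an invented tuning constant):

;; 0.0 in a ring's interior, rising linearly to 1.0 at its outer border;
;; the vertex shader then mixes the fine and coarse heights by this factor
(defun level-blend (distance half-extent &optional (start 0.7))
  (let ((ratio (/ distance half-extent)))
    (max 0.0 (min 1.0 (/ (- ratio start) (- 1.0 start))))))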

In the original paper they do this with an insane trick: they encode each heightmap value as a float and stuff the current level's height into the float's integer part, and the next level's into the fractional part. I don't know how this could ever be a good idea given floating point imprecision. I could understand stuffing both numbers into a single int, but this is crazy.

The way I do this in my current approach is different. I represent all the clipmap levels as a single 2D array texture, where each level in the texture represents one clipmap level. We can then easily access two levels, at the cost of two texture lookups of course. I could also see the encoding strategy working by limiting the depth to 16 bit and using 32 bit integer textures to stuff both levels into one. For now though, I'm not that concerned about performance, so optimisations like that seem folly.
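For completeness, the integer packing alternative just mentioned would look something like this sketch:

;; pack the current level's height and the next level's, each limited
;; to 16 bits, into a single 32-bit texel value
(defun pack-heights (fine coarse)
  (logior (ash coarse 16) fine))

(defun unpack-heights (packed)
  (values (ldb (byte 16 0) packed)     ; current level
          (ldb (byte 16 16) packed)))  ; next coarser level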

Now, keep in mind that in order to figure out the proper smoothing, we need to get the exact position of the vertex of the inner clipmap in the texture of the outer clipmap. This is not that big of a deal if your clipmaps are perfectly aligned and centred, as you can simply scale by 0.5. However, if you implement the clipmap geometry like they suggest in the paper, with the weird trim and all, now suddenly your geometry is slightly shifted. This caused me no end of grief and I was never able to get things to match up perfectly. I don't know why, but I suspect it's something about not doing the calculation quite exactly right, or something about the interpolation causing slight errors that nevertheless remain visible.

I might rewrite the clipmaps using perfectly centred rings as I outlined above, just to see whether I can get it working perfectly, as I really cannot see that huge of an advantage in their approach. To reiterate, the reason why they do it the way they do is, according to my understanding, to find a good balance between number of vertices to keep in memory, and number of draw calls to issue to draw a single geometry layer. If I used perfectly centred rings I would have to draw either 12 smallish square blocks, or one big ring. This is opposed to their technique, which requires 14 draw calls and three different sets of geometry. Wait what? If that makes you confused, I'm with you. I don't understand it either.

Either way, once you have the interpolation between levels going you're almost there. You can now draw a static terrain with very low computational effort using geometry clipmaps. However, usually you want to have things move around, which means you need to update the clipmaps. And this is where I stopped, as I could not figure out what exactly to do, and could not find any satisfactory information about the topic anywhere else either.

First of all, in the papers they only use a map for the coarsest level layer, and predict the finer levels based on interpolation with a residual. This has two consequences: one, you need to perform a costly ahead of time calculation to compute the residual for the entire terrain. Two, I do not believe that this interpolatory scheme will give you sufficient amount of control over details in the map. It will allow you to save texture space, by only requiring the coarsest level plus a single residual layer, but again I do not believe that such extreme optimisations are necessary.

Second, they use a format called PCT (a precursor of JPEG, I believe) which allows region of interest decoding from a file, meaning you can only read out the parts you really need in order to update the clipmap textures. This is nice, but I'm not aware of any CL library that supports this format, or supports JPEG and allows region of interest decoding.

In order to explain the third problem, let's first look at what happens when we need to update the terrain. Say we have a clipmap resolution of 1cm, meaning each grid cell is 1cm long at the finest level. The player now moves by +3cm in x. This means we need to shift the innermost layer by -3cm in x direction and load in the new data for that into the texture. We also need to update the next layer, as its resolution is 2cm, and we have thus stepped into the next grid cell for it as well. We don't need to step any of the other layers, as they are 4cm, and 8cm resolution respectively, and are too coarse for an update yet. This is fine because they are farther and farther away anyhow and we don't notice the lack of detail change. The edge interpolation helps us out here too as it bridges over the gaps between the layers even when they don't match up exactly.
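Deciding which levels need an update is simple in principle; here is a one-dimensional sketch of the test (names invented):

;; level L has cells of size (* base-cell (expt 2 L)); it needs new
;; data whenever the camera crosses into a different cell at that size
(defun levels-to-update (old-pos new-pos base-cell levels)
  (loop for level below levels
        for cell = (* base-cell (expt 2 level))
        when (/= (floor old-pos cell) (floor new-pos cell))
          collect level))

;; the example above: 1cm base cells, camera moves from 0cm to +3cm
;; (levels-to-update 0 3 1 4) => (0 1)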

Now, the way they do this updating in the papers is by drawing into the clipmap texture. They keep a lookup position for the texture which keeps up with the camera position in the game world. When the camera moves they fill in the new detail ahead of the camera's movement direction by drawing two quads. The magic of this idea is that texture lookup happens with the repeat border constraint, meaning that if you go beyond the standard texture boundary, you wrap around to the other end and continue on. Thus you only ever need to update a tiny region of the texture map for the data that actually changed, without having to shift the rest of the data around in the texture. Since this is probably quite hard to imagine, here's another picture from their paper that illustrates the idea:

gcm-update

The region "behind" the camera is updated, but because how the map is addressed happens through wraparound, it appears in front of it. Still, this is quite complicated, especially when you keep in mind that each level is, according to the paper's geometry, shifted by the trim and all. I'm also wondering whether, with my array texture technique, it would be faster to simply replace an entire clipmap texture level and save yourself the cost of having to first upload the two squares to textures, and then perform a draw call into the proper clipmap texture to update it. Since you know that each level has a fixed size, you could even allocate all the storage ahead of time and use a file format that could be directly memory mapped in or something like that, and then just use glSubImage2D to upload it all at once. This is just speculation and musing on my part, so I don't know whether that would be a feasible simplification on current hardware.

So all in all I have my problems with how the update is described in the paper. There is another source for geometry clipmaps, namely this talk by Marcin Gollent about how terrain rendering works in REDEngine 3 for the Witcher 3. He only very, very briefly talks about the clipmaps (5:55-6:55), but it still confuses me to this day, especially the way the clipmaps are illustrated. Let me try to outline what I understand from it:

Each clipmap level has a resolution of 1024² vertices, and we have five levels in total. The map is divided up into tiles of a uniform size, each tile representing 512² vertices on the finest level, meaning you need four tiles to render the innermost level fully. So far so good, but my question is now: what happens if you need to shift, say, the fourth level clipmap by one vertex in x and y? If each tile maps to a separate file on disk, it would mean we need to load in, at worst due to border conditions, 64 files. That seems like a whole lot to me. It would make more sense if you keep the resolution of a tile the same for each level, meaning you always need at worst 8 tiles to update a level. Either way, you end up with a tonne of files. Even just representing their map at the finest level would require 46²=2116 files.

I'm not sure if that's just me being flabbergasted at AAA games, or if there's something I truly don't understand about their technique. Either way, this approach, too, looks like a whole boatload of work to get going, though it does look more feasible than that of the original GPU clipmap paper. So far I haven't implemented either, as I've just been stuck on not knowing what to do. If there's something I'm legitimately not understanding about these techniques I'd rather not dive head first into implementing a strategy that just turns out to be stupid in the end.

I'd love to ask some people in the industry that actually worked on this sort of stuff to clear up my misunderstandings, but I couldn't for the life of me find a way to contact Marcin Gollent. I'm not surprised they don't put their email addresses up publicly, though.

If you have some ideas or questions of your own about this stuff, please do let me know, I'd be very interested in it. I definitely want to finish my implementation some day, and perhaps even develop some tooling around it to create my own heightmaps and things in Trial.

McCLIM: Sheets as ideal forms

· 107 days ago

CLIM operates on various kinds of objects. Some are connected by inheritance, others by composition, and some are similar in a different sense.

As programmers, we often deal with inheritance and composition, especially since OOP is a dominating paradigm of programming (no matter if the central concept is the object or the function). Less often do we deal with the third type of connection, that is, the Form and the phenomena which are merely a shadow mimicking it[1].

Let us talk about sheets. The sheet is a region[2] with an infinite resolution and potentially infinite extent on which we can draw. Sheets may be arranged into hierarchies, creating a windowing system with a child-parent relation. The sheet is the Form, with no visual appearance. What we observe is an approximation of the sheet, which may be hard to recognize when compared to other approximations.

[1]: Theory of forms.

[2]: And many other things.

Physical devices

Sheet hierarchies may be manipulated without a physical medium, but to make them visible we need to draw them. CLIM defines ports and grafts to allow rooting sheets to a display server. A medium is defined to draw a sheet's approximation on the screen[3].

How should a square with side length 10 look when drawn on a sheet? Since it doesn't have a unit, we can't tell. Before we decide on a unit, we need to take into consideration at least the following scenarios:

  1. If we assume a device-specific unit, results may drastically differ (density may vary from, say, 1200dpi on a printer down to 160dpi on a desktop[4]!). The very same square could measure 1cm on a paper sheet, 7cm or 10cm on different displays, and 200cm on a terminal. From the perspective of a person who uses the application this may be confusing, because physical objects don't change size without a reason.

  2. Another possible approach is to assume physical-world distances measured in millimeters (or inches if you must). Thanks to that, objects will have a similar size on different mediums. This is better, but still not good. We have to acknowledge that most computer displays are pixel based. Specifying distances in millimeters will inevitably lead to worse drawing quality[5] (compared to using pixels as the base unit). Moreover, converting millimeter values to pixels will almost inevitably force us to work with floats.

  3. Some applications may have specific demands. For instance, an application meant to run on a bus stop display showing arrival times. Space on the display is very limited and we can't afford the approximation from a high-density specification (in pixels or millimeters) to an 80x40 screen (5 lines of 8 characters with 5x8 dots each). We need to be very precise, and the ability to request a particular unit size for a sheet is essential. Of course, we also want to be able to test such an application on a computer screen.

I will try to answer this question in a moment. First, we have to talk about the CLIM specification and limitations imposed by McCLIM's implementation of grafts.

[3]: See some general recommendations.

[4]: Technically it should be PPI, not DPI (pixels per inch).

[5]: Given the programmer specifies sheet sizes in integers (like 100x100).

Ports, grafts, and McCLIM limitations

If a port is a physical connection to a display server, then a graft is its screen representation. The following picture illustrates how the same physical screen may be perceived depending on its settings and the way we look at it.

Graft drawing

As we can see, a graft has an orientation (:default starts at the top-left corner like a paper sheet, and :graphics starts at the bottom-left corner like on a chart). Moreover, it has units. Currently, McCLIM will recognize :device, :inches, :millimeters and :screen-sized[11].

That said, McCLIM doesn't implement grafts in a useful way. Everything is measured in pixels (which :device units are assumed to be) and only the :default orientation is implemented. By now we should already know that pixels are not a good choice for the default unit. Also, the programmer has no means to request a unit for a sheet (this is an API problem).

[11]: The screen-sized unit means that the coordinate [1/2, 1/2] is exactly in the middle of the screen.

Physical size and pixel size compromise

We will skip the third situation for now and decide what unit we should default to. There are cognitive arguments for units based on real-world distances, but there are also practical reasons against using millimeters - poor mapping to pixels, and the fact that already-written CLIM software is defined with the assumption that we operate on something comparable to a pixel.

Having all that in mind, the default unit should be the device-independent pixel. One of McCLIM's long-term goals is to adhere to Material Design guidelines - that's why we will use the dip[6] unit. 100dp has an absolute value of 15.875mm. On a 160dpi display it is 100px, on a 96dpi display it is 60px, on a 240dpi display it is 150px, etc. This unit gives us some nice characteristics: we are rather compatible with existing applications, and we preserve absolute distances across different screens.

[6]: Density-independent pixel.
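
To double-check the arithmetic, here is a throwaway helper (not part of any McCLIM API) that converts dip values to device pixels, given that 1dip is 1/160 of an inch:

(defun dip->pixels (dip device-dpi)
  ;; 1dip = 1/160in, so scale by the ratio of the device density to 160dpi
  (* dip (/ device-dpi 160)))

;; (dip->pixels 100 160) => 100
;; (dip->pixels 100 96)  => 60
;; (dip->pixels 100 240) => 150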

How to draw a rectangle on the medium

The application medium may be a pixel-based screen, a paper sheet, or even a text terminal. When the programmer writes the application, he operates on dip units, which have an absolute value of 0.15875mm. It is McCLIM's responsibility to map these units onto the device. To be precise, each graft needs to hold an extra transformation which is applied before sending content to the display server.

Now we will go through a few example mappings of two rectangle borders[7] drawn on a sheet. The violet rectangle coordinates are [5,5], [22,35] and the cyan rectangle coordinates are [25,10], [30,15].

  • On an MDPI display, device units are dip and they match the native units of our choosing. No transformation is required.

Graft drawing

  • Some old displays have a density of 72PPI. Not all coordinates map exactly to pixels - we need to round them[3]. Notice that the border is thicker and that proportions are a little distorted. On the other hand, despite the big change in resolution, the real-world size of the object is similar.

Windows Presentation Foundation declared the pixel of a 96PPI screen to be the device-independent pixel, because such displays were pretty common on desktops. Almost all coordinates map perfectly on this screen. Notice the approximation of the right side of the violet rectangle.

Lower DPI

  • The fact that a screen has a higher density doesn't mean that coordinates which map perfectly onto a lower-density screen will map well onto a higher-density one. Take this HDPI screen: almost all coordinates are floats, while on the MDPI display they all had integer values.

HDPI

  • Higher resolution makes the rectangles look better (the border line is thinner and distortions are less visible to the eye). Here is XXXHDPI:

XXXHDPI

  • Some printers have a really high DPI; here is an imaginary 2560 DPI printer. Funnily enough, its accuracy exceeds our screen density, so the red border which is meant to show the "ideal" rectangle is a little off (it's fine if we scale the image, though).

HQ Printer

  • Until now we've seen screens with square pixels (or dots). Let's take a look at something with a really low density - a character terminal. To make the illustration better, we assume an absurd terminal which has 5x8 DP per character (too small to be seen by the human eye). Notice that the real size of the rectangles is still similar.

Character terminal

It is time to deal with graphics orientation (the Y axis grows towards the top). An imaginary plotter with 80DPI resolution will be used to illustrate two solutions (the first one is wrong!). Knowing the plotter's screen height is important for knowing where to start the drawing.

  • The graft reverses the Y axis and sends the image to the plotter. Do you see what is wrong with this picture? We have defined both rectangles in the default orientation, so our drawing should look similar regardless of the medium we print on. We do preserve the real size, but we don't preserve the image orientation - the cyan rectangle should be higher up on the plot.

Plotter (bad transformation)

  • The correct transformation involves reversing the Y axis and translating objects by the screen height. See the correct transformation (on an 80DPI and on an MDPI plotter).

80DPI and MDPI plotters
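
Such a transformation could be built from the standard CLIM transformation constructors. Here is a minimal sketch (not McCLIM's actual graft code), assuming the screen height is given in dip and the device resolution in DPI:

(defun plotter-transformation (screen-height dpi)
  ;; COMPOSE-TRANSFORMATIONS applies its second argument first
  (clim:compose-transformations
   ;; applied last: scale dip coordinates to device units (1dip = 1/160in)
   (clim:make-scaling-transformation (/ dpi 160) (/ dpi 160))
   (clim:compose-transformations
    ;; applied second: translate by the screen height
    (clim:make-translation-transformation 0 screen-height)
    ;; applied first: flip the Y axis
    (clim:make-scaling-transformation 1 -1))))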

[7]: The ideal border is composed of lines, which are 1-dimensional objects that don't occupy any space. The red border in the drawings marks the "ideal" object boundaries. Points are labeled in device units (with possible fractions).

Sheets written with a special device in mind

There is still an unanswered question - how can we program applications with specific device limitations in mind? As we have discussed earlier, the default sheet unit should be dip, and the default sheet orientation is the same as a paper sheet's[8].

Writing an application for a terminal is different from writing an application for a web browser. The number of characters which fit on the screen is limited, drawing shapes is not practical, etc. To ensure that the application is rendered correctly we need a special kind of sheet which operates on character units. Take a look at the following example.

Sheets with different units.

The first sheet's base unit is a character of a certain physical size. We print some information on it in various colors. Nothing prevents us from drawing a circle[9], but that would miss the point. We use a character as the unit because we don't want rounding and approximation. Background and foreground colors are inherited.

The second sheet's base unit is dip. It has two circles, a solid grey background, and a few strings (one written in italics).

Ideally, we want to be able to render both sheets on the same physical screen. The graft and the medium will take care of the sheet approximation. The effect should look something like this.

Different kinds of screens.

The first screen is a terminal. Circles from the second sheet are approximated with characters and look ugly, but the overall effect resembles the original application. Proportions are preserved (to some degree). We also see the terminal-specialized sheet, which looks exactly as we have defined it.

The second screen is an MDPI display. The second sheet looks very much like what we have defined. What is more interesting, though, is our first sheet - it looks exactly the same as on the terminal.

[8]: Providing means to change defaults requires additional thought and is a material for a separate chapter. McCLIM doesn't allow it yet.

[9]: As we have noted sheets are ideal planar spaces where line thickness is 0 and there is nothing preventing us from using fractions as coordinates.

Port and graft protocols

Now we know what we want. Time to think about how to achieve it. Let me remind you what kind of objects we are dealing with:

  • Port is a logical connection to a display server. For instance, it may contain a foreign handler which is passed to the external system API. It is responsible for the communication - configuring, writing to, and reading[10] from the device we are connected to.

  • Graft is a logical screen representation on which we draw. It is responsible for all transformations necessary to achieve the desired effect on the physical screen. The same port may have many associated grafts for applications with different units and orientations.

  • Medium is a representation of the sheet on a physical device. The sheet is the Form: a region which may be drawn on - it doesn't concern itself with physical limitations.

In the next post I will show how to implement the port and the graft (and a bit of the medium) for the charming backend. I will cover only the bare minimum for mediums, enough to verify that the graft works as expected. A complete medium implementation is material for a separate post.

[10]: Sometimes we deal with devices which we can't take input from - for instance a printer, a PDF render or a display without other peripherals.

Quicklisp newsDownload stats for February 2018

· 107 days ago
Here are the raw download stats for projects in February.

20308  alexandria
16702 closer-mop
15724 babel
15563 cl-ppcre
15531 split-sequence
15002 trivial-features
14598 iterate
13926 trivial-gray-streams
13822 bordeaux-threads
13786 anaphora
12948 let-plus
12777 cffi
12329 trivial-garbage
11742 flexi-streams
10768 nibbles
10530 puri
10329 usocket
9602 trivial-backtrace
9316 more-conditions
9051 chunga
8828 cl-fad
8794 cl+ssl
8641 cl-base64
8295 esrap
8228 chipz
8011 utilities.print-items
7860 named-readtables
7510 drakma
6918 local-time
6803 cl-yacc
6634 ironclad
5743 asdf-flv
5588 parse-number
5555 fiveam
5208 cl-json
5189 closure-common
5171 cxml
5151 log4cl
4556 optima
4476 parser.common-rules
4372 architecture.hooks
4204 cl-unicode
4101 plexippus-xpath
3920 cl-interpol
3779 lparallel
3551 lift
3520 cl-clon
3459 cl-dot
3299 trivial-types
3257 slime
3155 cl-syntax
3143 cl-annot
3136 cxml-stp
3098 cl-store
3000 fare-utils
2997 fare-quasiquote
2973 xml.location
2971 cl-utilities
2853 utilities.print-tree
2853 trivial-utf-8
2850 static-vectors
2814 fare-mop
2812 inferior-shell
2741 quri
2718 metabang-bind
2686 trivial-indent
2664 md5
2638 documentation-utils
2549 fast-io
2545 uuid
2542 array-utils
2378 plump
2275 djula
2275 symbol-munger
2271 cl-slice
2232 collectors
2225 gettext
2190 arnesi
2189 hunchentoot
2162 access
2151 cl-parser-combinators
2146 cl-locale
2028 simple-date-time
2019 ieee-floats
1889 prove
1831 yason
1603 asdf-system-connections
1588 metatilities-base
1574 cl-containers
1525 rfc2388
1471 postmodern
1460 fast-http
1441 trivial-mimes
1358 salza2
1338 monkeylib-binary-data
1334 cl-colors
1305 jonathan
1292 proc-parse
1277 xsubseq
1253 cl-ansi-text

Nicolas HafnerPipeline Allocation - Gamedev

· 109 days ago

header
This is a follow-up to my previous entry, where I talked about the shader pipeline system. However, I specifically excluded the way resources for shaders are allocated in that article. We'll get to that now, since I finally got around to rewriting that part of the engine to be more complete.

As you may recall, each rendering step is segregated into a shader stage. This stage may either render all on its own, render a scene directly, or influence the render behaviour of the scene. However, each stage will need buffers to draw to that can act as the inputs to future stages. This means that we have an allocation problem on our hands: ideally we'd like to minimise the number of buffers needed to render the entire pipeline, without having any of the stages accidentally draw into a buffer it shouldn't.

Let's look at the following pipeline, which consists of five stages:

pipeline

The first stage renders the scene as normal, and produces a colour and depth buffer as a result. The second stage renders the scene but every object is drawn pitch black. It only produces a colour buffer, as depth is irrelevant for this. We then blend the two colour buffers together while radially blurring the black one. This results in a "god ray" effect. The next stage only renders transparent objects. This is usually done in order to avoid having to perform expensive transparency calculations on opaque objects. Finally the two stages are rendered together, using the depth and alpha information to blend. This produces the final image.

In a naive implementation, this would require seven buffers: five colour buffers and two depth buffers. However, you can get by with just three colour buffers instead, by re-using the colour buffers from the first two stages for the fourth and fifth stages. This kind of allocation problem, where a graph consists of nodes with discrete input and output ports, has cropped up in multiple places for me. It is a form of graph colouring problem, though I've had to invent my own algorithm as I couldn't find anything suitable. It works as follows:

  1. The nodes in the DAG are topologically sorted. We need to do this anyway to figure out the execution order, so this step is free.
  2. Count the number of ports in the whole graph and create a list of colours, where each colour is marked as being available.
  3. Iterate through the nodes in reverse order, meaning we start with the last node in the execution order.
  4. For each already coloured port in the node, if the port is noted as performing in-place updates, mark the colour as available. This is never the case for OpenGL textures, but it is useful for other allocation problems.
  5. For each input port in the node:
    1. If the port on the other side of the connection isn't coloured yet, pick the next available colour from the colour list.
    2. Set this colour as the colour of the other port.
    3. Mark the colour as unavailable.
  6. For each output port in the node:
    1. If the port isn't coloured yet, pick the next available colour from the colour list.
    2. Set this colour as the colour of the port.
    3. Mark the colour as unavailable.
  7. For each port in the node:
    1. If the port is coloured, mark the colour as available again.

This algorithm needs to be repeated separately for each incompatible buffer type. In our case, we need to run it once for the depth buffers, and once for the colour buffers. Since the algorithm might be hard to understand in text, here's a small animation showing it in action:

allocation
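
In code, the core of this colouring pass might look something like the following sketch. This is not Trial's actual implementation; node-inputs, node-outputs, connected-port and port-color are hypothetical accessors, colours are plain integers, and sorted-nodes is assumed to be in execution order:

(defun allocate-buffers (sorted-nodes)
  (let ((free '()) (counter -1))
    (flet ((pick () (or (pop free) (incf counter))))
      (dolist (node (reverse sorted-nodes))
        ;; step 5: the buffer feeding each input must stay live until this
        ;; node has run, so colour the output port on the other side
        (dolist (in (node-inputs node))
          (let ((source (connected-port in)))
            (unless (port-color source)
              (setf (port-color source) (pick)))))
        ;; step 6: colour this node's own output ports
        (dolist (out (node-outputs node))
          (unless (port-color out)
            (setf (port-color out) (pick))))
        ;; step 7: nodes that execute earlier may reuse this node's outputs
        (dolist (out (node-outputs node))
          (pushnew (port-color out) free))))))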

As far as I can tell, this algorithm performs the optimal allocation strategy in every case. I don't think it's amazingly fast, but since our graphs will be relatively small, and won't change very often, this is a negligible cost.

Unfortunately this allocation is not the whole story yet. In order to be able to figure out which buffers are compatible, we need to perform another analysis step beforehand. Namely, we need to figure out which texture specifications are "joinable". Some shader passes might have special requirements about their buffers, such as specific resolutions, internal formats, or interpolation behaviour. Thus, in order to know whether two buffers can potentially be shared, we first need to figure out whether their requirements can be joined together. This turns out to be a tricky problem.

Before we can get into this, I think it's worth explaining a bit more about how this stuff works in OpenGL. Two components are essential to this, namely textures and frame buffers. Any OpenGL context has a default frame buffer, which is typically the window you draw to. This frame buffer has a colour buffer, and a depth and stencil buffer if you activate those features. However, since the output of this frame buffer is internal to OpenGL, you can't capture it. If you need to re-process the contents, you'll have to use custom frame buffers. Each frame buffer has a number of attachments, to which you need to bind fitting textures. When the frame buffer is bound, a rendering call will then draw into the bound textures, rather than to the window.
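
To make the attachment mechanics a bit more concrete, here is roughly what this dance looks like through cl-opengl. This is a generic illustration rather than Trial's code; tex is assumed to be an already-created texture object:

(let ((fbo (first (gl:gen-framebuffers 1))))
  (gl:bind-framebuffer :framebuffer fbo)
  ;; attach TEX to the first colour attachment; draw calls now render
  ;; into TEX instead of the window
  (gl:framebuffer-texture-2d :framebuffer :color-attachment0 :texture-2d tex 0)
  ;; ... issue rendering calls here ...
  (gl:bind-framebuffer :framebuffer 0)) ; back to the default frame buffer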

Previously Trial simply determined automatically what kind of texture to use, based on the window dimensions and the attachment the output was supposed to be for. Thus allocation was trivial, as I just needed to run it once for each kind of attachment. However, this is not optimal. Certain effects, such as HDR, require other texture features, such as a specific internal format. Once you give the user the ability to completely customise all texture options, things become difficult. Textures have the following attributes:

  • width The width of the texture. This applies to all texture targets.
  • height The height of the texture. This only applies to 2D, 3D, or array textures.
  • depth The depth of the texture. This only applies to 3D, or 2D array textures.
  • target The texture target, which encodes the number of dimensions, whether it's an array, and whether it has multisampling.
  • samples If it is a multisample texture, how many samples it supports.
  • internal format The internal pixel storage format. I'll mention this specifically later.
  • pixel format The external pixel format when loading pixel data into the texture. Closely related to the internal format.
  • pixel type The external pixel type when loading pixel data into the texture. Closely related to the pixel format.
  • magnification filter What algorithm to use when the texture is scaled up.
  • minification filter What algorithm to use when the texture is scaled down.
  • anisotropy What level of downscaling anisotropy to use.
  • wrapping How access outside of the normalised texture coordinates is handled.
  • storage Whether the texture storage can be resized or not.

That's quite a few properties! Fortunately most properties are either trivially joinable, or simply incompatible. For instance, the following are incompatible:

target, pixel format, pixel type, magnification filter, minification filter, wrapping, storage

And the following are trivially joinable by just choosing the maximum:

samples, anisotropy

The biggest problems are the dimensions, and the internal format. I'm as of yet undecided what to do with the dimension constraints, since in some cases the constraint the user writes down might just be a lower bound advice and could be upgraded to a bigger size, but in others an exact size might be expected.

The internal format however is the real kick in the dick. OpenGL has a plethora of different internal formats that may or may not be joinable. Here's a list of some:

red, r8i, r16-snorm, rgb32ui, rgba16f

Not... too bad, right? Well, unfortunately that's not all. How about these formats?

rgba2, rgb5-a1, r3-g3-b2, rgb9-e5, rgb10-a2ui, srgb8-alpha8

Or how about these?

compressed-srgb-alpha-bptc-unorm, compressed-signed-rg-rgtc2

That's not even all of them, and even worse, the complete list does not cover all possible combinations! For instance, there is no rgb2 even though there's an rgba2. There's also no r6-g6-b5, and no rgb16-e9, etc. Some of these formats are also simply insane to consider. I mean, how is a r11f-g11f-b10f format supposed to work? That's supposedly 11 bits for red and green each, and 10 bits for blue, but as floats. What??

Putting the insanity of the formats aside, the task of joining them is, fortunately, somewhat clear: a format can be joined with another of equal or greater number of colour components. A format can be joined with another of equal or greater bits for each shared component. A format can be joined with another if the required colour type matches, and the required features such as compression and srgb match. To put this into examples:

  • red ∨ red = red
  • red ∨ rg = rg
  • red ∨ r16 = r16
  • rg ∨ r32 = rg32
  • rgb32f ∨ rgb8i = ∅
  • compressed-red ∨ red = ∅

You could argue that a non-compressed format could be joined with a compressed one, but for now I've taken the stance that, if the user really requests such a thing, the use is probably pretty special-purpose, so I'd rather not muck with it.

In order to do this in a somewhat sane manner that isn't explicitly listing all possible upgrades, I've written a machinery that can decompose the internal formats into a sensible plist that documents the precise requirements of the format. Based on this, a joining can then be performed much more easily. Finally, the resulting joined format is then recomposed into a matching internal format again. This last step is currently not optimal, as sometimes a join might result in an inexistent format, in which case further upgrading of the join might be necessary. However, for my current needs it is good enough, and I honestly doubt I'll ever reach the point where I need to touch this again.
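
As a rough illustration of the idea - the plist representation and the join function here are invented for this example, not Trial's actual machinery:

;; "r8" might decompose to (:components 1 :depth 8 :type :unorm). Joining
;; picks the max of each numeric requirement when the colour types and
;; features match, and fails with NIL otherwise.
(defun join-decomposed-formats (a b)
  (flet ((prop (format key) (getf format key)))
    (when (and (eql (prop a :type) (prop b :type))
               (eql (prop a :compression) (prop b :compression))
               (eql (prop a :srgb) (prop b :srgb)))
      (list :components (max (prop a :components) (prop b :components))
            :depth (max (prop a :depth) (prop b :depth))
            :type (prop a :type)
            :compression (prop a :compression)
            :srgb (prop a :srgb)))))

;; (join-decomposed-formats '(:components 1 :depth 8  :type :unorm)
;;                          '(:components 2 :depth 16 :type :unorm))
;; => (:components 2 :depth 16 :type :unorm :compression NIL :srgb NIL)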

Alright, so now that we know how to join internal formats, we can join texture specifications. Thus, as a step before the allocation, the system gathers all texture specifications from the ports, normalises them, and then repeatedly joins every spec with every other spec, replacing the two inputs with the result if the join succeeds, or keeping them if it doesn't. At some point the set of specs won't change anymore, at which point we have the minimal texture spec set.
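
That fixed-point search might look something like this sketch, where join-specs is assumed to return the joined spec or NIL when the join fails:

(defun minimise-specs (specs)
  (let ((changed t))
    (loop while changed
          do (setf changed nil)
             (loop named scan
                   for (a . rest) on specs
                   do (loop for b in rest
                            for joined = (join-specs a b)
                            when joined
                              do (setf specs (list* joined
                                                    (remove b (remove a specs)))
                                       changed t)
                                 (return-from scan))))
    specs))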

We then run the algorithm for each spec in this set, only considering ports whose spec can be joined with it. This results in a perfect allocation that respects the texture constraints each pass might have. A last, additional step could be performed here that I do not do (yet): since some ports might not get shared due to the allocation constraints, their texture specs could be relaxed again, so we would need to recompute the join for all ports in the same colouring group.

I've rewritten this system as part of the asset rewrite, since I thought that if I break things I might as well break them hard. Trial is not quite yet back on its feet right now, but I can at least compile a minimal pipeline and run a window, so it's very close to being functional again.

As a next step from here, I think I'll work some more on UI stuff. I know I promised an article on geometry clipmaps, and that is coming, but it'll be another while. Things have been surprisingly busy recently, and I've also been surprisingly lazy as of late. Hopefully both of those will improve soon!

LispjobsSoftware Engineer (Clojure), Chatterbox Labs, UK

· 110 days ago

From this post: http://blog.chatterbox.co/recruiting-software-engineer/

Machine Learning; Artificial Intelligence; Natural Language Processing; Functional Programming; Java; Clojure; Lisp

Chatterbox Labs are looking for a Software Engineer to join our UK-based team and help implement our cognitive technologies. You should hold an MSc in the field of Computer Science (or a related/similar field) and have experience in a commercial software environment.

You will join our technical team who focus on the research and development of our multi-lingual Cognitive Engine for short form and long form text classification & our Synthetic Data Generator, alongside our Data Scientists.

This diverse role will allow you to both collaborate on machine learning research and to also implement commercial software.

Responsibilities
  • Develop, extend and maintain underlying machine learning within the Cognitive Engine & Synthetic Data Generator
  • Implement & test Clojure & Java code for commercial release
  • Research, explore & evaluate new methods for classification & data synthesis
  • Co-create new Intellectual Property alongside the Data Science team
  • Test and evaluate current technologies and make recommendations for improvements
Required Qualifications
  • MSc in Computer Science or related field
  • Demonstrable experience of using a functional language, e.g. Clojure, Scala, Racket, Common Lisp, Haskell, Erlang, etc
  • Commercial experience using a Java Virtual Machine language (Java, Clojure, Groovy, etc)
  • Experience using collaborative software management tools (git, subversion, etc)
  • Strong problem solving and algorithm development skills
  • The desire to build robust, proven and tested technologies
Desirable Qualifications
  • Experience of developing machine learning software in a commercial environment
  • Experience either researching or applying:
    • Reinforcement Learning
    • Deep learning techniques
  • Experience dealing with large datasets
  • Experience with large-scale processing frameworks (e.g. Hadoop, Spark)
Minimum Education Level

MSc in Computer Science or related field

Language Requirements
  • Spoken English language is essential
  • A second language is useful
To apply

Please send an email to careers@chatterbox.co with a cover note introducing yourself and a bit about you, and attach a comprehensive CV in pdf format.

Quicklisp newsQuicklisp dist update for February 2018 now available

· 111 days ago
New projects:
  • base-blobs — Base system foreign library collection — MIT
  • bodge-glad — OpenGL 4.6 Core GLAD wrapper for cl-bodge system — MIT
  • bodge-glfw — Wrapper over glfw3 library — MIT
  • bodge-nuklear — Wrapper over nuklear IM GUI library for cl-bodge system — MIT
  • bodge-ode — Thin wrapper over Open Dynamics Engine — MIT
  • bodge-openal — Thin wrapper over OpenAL cross-platform 3D audio API — MIT
  • bodge-sndfile — Wrapper over libsndfile for cl-bodge system — MIT
  • cl-flow — Data-flow driven concurrency model for Common Lisp — MIT
  • cl-muth — Multithreading utilities — MIT
  • cl-selenium-webdriver — cl-selenium-webdriver is a binding library to the Selenium 2.0 — MIT
  • cl-toml — TOML v0.4.0 parser and encoder — MIT
  • clutz — Cross-platform utility toolkit for supporting simple Common Lisp OpenGL applications — MIT
  • glad-blob — GLAD foreign library collection — MIT
  • glfw-blob — GLFW foreign library collection — MIT
  • hdf5-cffi — hdf5-cffi is a CFFI wrapper for the HDF5 library. — BSD
  • listopia — This is no official port of Haskell package Data.List — LLGPL
  • matlisp — Matlisp is a scientific computation library for Common Lisp. — LLGPL
  • nanovg-blob — Foreign library collection of nanovg 2d drawing library — MIT
  • nuklear-blob — Nuklear IM GUI foreign library collection — MIT
  • ode-blob — Foreign library collection of ODE 3d physics library — MIT
  • openal-blob — OpenAL Soft foreign library collection — MIT
  • simple-flow-dispatcher — Reference implementation of a dispatcher for cl-flow library — MIT
  • sndfile-blob — SNDFILE foreign library collection — MIT
Updated projects: 3bgl-shader, ahungry-fleece, architecture.service-provider, asd-generator, bodge-blobs-support, bodge-chipmunk, cells, cepl, cepl.glop, cepl.sdl2, cepl.spaces, cl-algebraic-data-type, cl-ana, cl-ansi-term, cl-charms, cl-conllu, cl-csv, cl-emoji, cl-gobject-introspection, cl-mongo-id, cl-oclapi, cl-opengl, cl-pcg, cl-permutation, cl-readline, cl-skkserv, cl-soil, cl-str, cl-svg, claw, clml, closer-mop, clweb, clx, common-lisp-stat, configuration.options, croatoan, deflate, documentation-utils, doubly-linked-list, dufy, fare-scripts, fiasco, fiveam, flac-metadata, flac-parser, for, gamebox-dgen, gamebox-ecs, gamebox-frame-manager, gamebox-math, gamebox-sprite-packer, genie, glsl-spec, inquisitor, ironclad, iterate, jonathan, jsonrpc, kenzo, lisp-chat, lispbuilder, local-time, maiden, mcclim, md5, mgl, mito, nineveh, oclcl, osicat, overlord, paiprolog, parse-number, parsley, plump, pngload, postmodern, pp-toml, psychiq, puri, qtools-ui, quux-hunchentoot, remote-js, rtg-math, rutils, sanitized-params, serapeum, sha3, shadow, shorty, simple-logger, skitter, specialization-store, split-sequence, stumpwm, type-r, ubiquitous, varjo, verbose, with-setf.

Removed projects: algebraic-data-library, cl-curlex, cl-forest, cl-gpu, cl-indeterminism, cl-larval, cl-read-macro-tokens, cl-secure-read, cl-stm, defmacro-enhance, fs-utils, funds, lol-re, stump-touchy-mode-line, yaclanapht.

There is some extra churn this month. It's mostly because of the failure of cl-curlex, a library that uses SBCL internals and which no longer works as of the latest SBCL. Many projects (all by the same author) depend on cl-curlex, and they all broke. There hasn't been any response to my bug report yet.

The other removals are due to author request, neglect, or other breakage. If you'd like to restore a removed project to Quicklisp, let me know - I'm happy to help.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

Zach BeaneParadigms of AI Programming is now available as a free PDF

· 112 days ago

Peter Norvig has published Paradigms of AI Programming on github. There are PDFs of the entire book along with a plain-text version and separate files with the code.

I’m not exactly sure how this came about (Robert Smith hints at it on twitter) but it’s a fantastic development. I recommend Paradigms of AI Programming to anyone who wants to learn Common Lisp, and having it freely available will help a great many people.

I avoided this book for a while because I had little interest in AI. But PAIP isn’t really about AI. The book is about programming effectively, with Common Lisp as the language and classical AI problems as the context. The tools and techniques you take away from careful study are applicable in many, many situations.

McCLIMMcCLIM 0.9.7 "Imbolc" release

· 124 days ago

After 10 years we have decided that it is time to make a new release - the first one since 2008, which was McCLIM 0.9.6, St. George's Day. Imbolc is a Gaelic traditional festival marking the beginning of spring held between the winter solstice and the spring equinox.

Since so much time has passed, the number of changes is too big to list in full detail; we will thus note only major changes made during the last eleven iterations (though many important changes were made before that). For more information please check out previous iteration reports on the McCLIM blog, the git log and the issue tracker. We'd like to thank all present and past contributors for their time, support and testing.

  • Bug fix: tab-layout fixes.
  • Bug fix: formatting-table fixes.
  • Bug fix: scrolling and viewport fixes and refactor.
  • Feature: raster image draw backend extension.
  • Feature: bezier curves extension.
  • Feature: new tests and demos in clim-examples.
  • Feature: truetype rendering is now default on clx.
  • Feature: additions to region, clipping rectangles and drawing.
  • Feature: clim-debugger and clim-listener improvements.
  • Feature: mop is now done with CLOSER-MOP.
  • Feature: threading is now done with BORDEAUX-THREADS.
  • Feature: clx-fb backend (poc of framebuffer-based backend).
  • Feature: assumption that all panes must be mirrored has been removed.
  • Cleanup: many files cleaned up from style warnings and such.
  • Cleanup: removal of PIXIE.
  • Cleanup: removal of CLIM-FFI package.
  • Cleanup: changes to directory structure and asd definitions.
  • Cleanup: numerous manual additions and corrections.
  • Cleanup: broken backends have been removed.
  • Cleanup: goatee has been removed in favour of Drei.
  • Cleanup: all methods have now corresponding generic function declarations.

We also have a bounty program financed with money from the fundraiser. We are grateful for financial contributions which allow us to attract new developers and reward old ones with bounties. Currently active bounties (worth $2650) are available here.

As Imbolc marks the beginning of spring we hope this release will be one of many in the upcoming future.

Nicolas HafnerShader Pipelines and Effects - Gamedev

· 132 days ago

header
In the last entry I talked about Trial's way of handling shaders and how they are integrated into the object system to allow inheritance and composition. This time we'll take a look at how that system is extended to allow entire pipelines of effects and rendering systems.

Back in the olden days you could get away with a single pass to render everything. In fact, the fixed-function pipeline of GL 2.1 works that way, as do many simple rendering systems like primitive forward rendering, or most 2D games. However, more complex systems and effects require multiple passes. For instance, a deferred rendering system requires at least two passes, one that emits a "gbuffer", and another that combines the information from it to produce the final image. Another example would be a "god-rays" effect, which requires rendering the scene with all objects black, applying a radial blur to that, and then finally blending it with a rendering of the scene as normal.

These kinds of pipelines can become quite complex, especially when you start to consider that each pipeline might require multiple kinds of textures to use as buffers, inputs, and outputs. The management and allocation of these resources is a pain to do manually, so Trial includes a system to reduce the process to a graph definition. This allows you to write complicated pipelines without having to worry about any of the underlying buffers. The pipeline part is only one half of the equation, but we'll get to the rest later.

To do the rendering, a stage will require a set of textures to output to, and a set of textures as inputs to the stage. A stage might offer additional parameters aside from that to tweak its behaviour, but those do not require allocation, so we can disregard them for now.

To give a bit of context I'll explain how these textures are attached to the shaders in OpenGL. I've glossed over that in the previous entry as it wasn't directly relevant. Let's consider a simple fragment shader that simply copies the colour from an input texture to an output texture:

uniform sampler2D previous; // "input" is a reserved word in GLSL
out vec4 color;

void main(){
  // texelFetch requires an explicit LOD argument; 0 is the base level
  color = texelFetch(previous, ivec2(int(gl_FragCoord.x), int(gl_FragCoord.y)), 0);
}

Every GLSL file begins with a set of variable declarations. Each of these global variables has a storage qualifier that tells OpenGL where it comes from. uniforms are parameters that can be set from the CPU before the rendering is performed and are the same for all of the stages in the rendering. outs are variables that are sent to the next stage. The last important qualifier that isn't mentioned above is in, which takes variables from a previous stage. Every file must also contain a main function that acts as the entry point for the computation. In this case we perform a texture read at the current fragment position and store it into the output color - effectively copying it.

The way in which outputs and inputs differ should become apparent here. Inputs are declared as uniforms, which need to be set by the application before each rendering call. The outputs on the other hand are part of the framebuffer target, which is much more implicit. A frame buffer is a special kind of object that points to a set of textures to which the rendering is output. The default frame buffer will draw to your window so that things can actually be seen. This discrepancy between inputs and outputs is a bit of a pain, but it is simply a consequence of how OpenGL evolved. If I had to design a graphics API like this today I would definitely treat input and output buffers in a much more similar manner.

Whatever the case may be, we have the ability to fix this in Trial by giving the user a uniform view over both inputs and outputs to a shader stage. For instance, it would be nice if we could simply say something like the following:

(define-shader-pass renderer ()
  ((:output color)))

(define-shader-pass blur ()
  ((:input previous)
   (:output color)))

Then when we need to define an actual shader pipeline we could just do something like this:

(let ((pipeline (make-pipeline))
      (renderer (make-instance 'renderer))
      (blur (make-instance 'blur)))
  (connect (output color renderer) (input previous blur) pipeline)
  (pack pipeline))

What you see above, you can do in Trial, almost exactly. The syntax is a bit different, but the concept is much the same. The pack step analyses the graph formed by the pipeline and automatically allocates the minimal number of texture buffers needed, re-using them where possible. This is especially useful if you have a long chain of post-effects.

As the texture allocation system is implemented right now it's unfortunately pretty suboptimal. There's some severe limitations that I need to fix, but I have a good idea on how to do that, so I'll probably get around to doing that soon. Still, for most of my work so far the current approach has sufficed, and the general idea is clear.

With the pipeline and allocation system in place, one half of the equation is solved. The other half requires a bit more ingenuity. So far the passes we've looked at did one of two things: either simply render the scene to a buffer, or perform some kind of buffer to buffer transformation like blurring. However, most actually interesting shaders will want to do more complex things than that - they'll want to not only render objects, but also influence how they are rendered.

But, hold on, don't the objects themselves decide how they are rendered? And now the pass should suddenly decide what's going on? Well yes, and yes. Some passes will want complete control over how every object is drawn, some will only want to influence how objects are drawn, and some will not care how the objects are drawn at all. Allowing all of this is the second part of the shader pipeline protocol. As you know, modelling protocols with CLOS is my favourite thing to do, so strap in.

(define-shader-entity shader-pass () ())

(defgeneric register-object-for-pass (pass object))
(defgeneric shader-program-for-pass (pass object))
(defgeneric make-pass-shader-program (pass class))
(defgeneric coerce-pass-shader (pass class type spec))

The above class definition is not quite accurate, but for simplicity's sake I've shortened it to something that will be clear enough for our purposes here, and will link back to the previous entry appropriately. Each shader pass is a shader entity, and as such allows you to attach shader code to the class definition.

The first function, register-object-for-pass, is used to notify a shader pass about an object that would like to draw in the current scene. This is necessary for any shader pass that would like to influence the drawing of an object, or leave it up to the object entirely. In both of those cases, the pass will compute a new effective shader for the object and compile that into a shader program to use. This program can then be retrieved by the object with shader-program-for-pass. Thus, whenever an object is painted onto a stage, it can use this function to retrieve the shader program and set the necessary uniforms with it. Stages that would like to supplant their own rendering method can simply ignore these two functions.
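
To make the protocol concrete, here is a hypothetical sketch of how an entity's paint method might use it. paint matches the protocol described later in this post, but the uniform setter and the matrix accessor are illustrative stand-ins, not Trial's exact API:

(defmethod paint ((object cube) (pass shader-pass))
  (let ((program (shader-program-for-pass pass object)))
    ;; set whatever uniforms this object's shader expects, then draw
    (setf (uniform program "model_matrix") (model-matrix object))
    (draw-vertices object program)))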

The make-pass-shader-program function is responsible for combining the shader programs from the given class and pass, and to produce a useful shader program resource object as a result. This function, by default, simply does some book keeping and calls coerce-pass-shader to do the actual combination. coerce-pass-shader in turn merely retrieves the corresponding shader code from the pass and once again uses glsl-toolkit's merge-shader-sources to combine it with the existing shader.

That might be a bit much to put together in your head, so let's look at an example. We'll have an object that simply draws itself textured, and a shader pass that should apply a sort of fade out effect where things farther away become more transparent.

(define-shader-entity cube (vertex-entity textured-entity) ())

(define-shader-pass fade (render-pass) ())

(define-class-shader (fade :fragment-shader)
  "out vec4 color;
  
void main(){
  color.a *= (1 - gl_FragCoord.z);
}")

(defmethod setup-scene ((main main) scene)
  (enter (make-instance 'fade) scene)
  (enter (make-instance 'cube) scene))

Similar to how we did it in the previous entry, we define a new entity that is composed of vertices and has textured surfaces. I omitted some details like what vertex data and textures to use to make it simpler. Then we define a new shader pass, which is based on the existing render-pass that is a per-object-pass with a default color and depth output. Next we have to add the shader logic to our pass. Doing what we want is actually quite simple - we simply multiply the colour's alpha value with the depth value. The depth needs to be inverted, as by default 1 is far away and 0 is close by. Thus, with this shader, the farther away the fragment is, the more transparent it will now become.

Finally we define a method that hooks into the rest of Trial and establishes a scene to be drawn. We first enter a new instance of our fade pass into it, which will automatically get added to the underlying shader pipeline. We then add an instance of the cube to it, at which point several things will happen:

  1. The cube is added to the scene graph
  2. register-object-for-pass is called on the pass and cube instances.
  3. This will call determine-effective-shader-class on the cube. This function is used as a caching strategy to avoid duplicating shaders for classes that don't change anything from their superclass.
  4. If the determined class is not yet registered, make-pass-shader-program is called.
  5. make-pass-shader-program computes the effective shader definitions for each shader type.
  6. coerce-pass-shader is called for each type of shader definition that is used, and combines the object's shader source with that of the pass.
  7. The resulting shader program resource object is returned and stored in the pass.

The fragment shader produced by coerce-pass-shader will look something like this:

in vec2 texcoord;
out vec4 color;
uniform sampler2D texture_image;

void _GLSLTK_main_1(){
  color *= texture(texture_image, texcoord);
}

void _GLSLTK_main_2(){
  color.a *= (1 - gl_FragCoord.z);
}

void main(){
  _GLSLTK_main_1();
  _GLSLTK_main_2();
}

Again, this neatly combines the two effects into one source file that can be transparently used in the system. Once it comes to actually drawing the scene, something like the following course of events is going to take place:

  1. paint is called on the scene with the pipeline.
  2. paint-with is called on the fade pass with the scene.
  3. paint is called on the cube with the fade pass.
  4. shader-program-for-pass is called on the fade pass and cube. The previously cached program is retrieved and returned.
  5. The textured-entity part of the cube uses the shader program to set the texture uniform.
  6. The vertex-entity part of the cube uses the shader program to set the transformation matrices.
  7. The vertex-entity part of the cube calls the OpenGL vertex drawing function to finally run the rendering process.
  8. The color output from the fade pass is blitted onto the window for the user to marvel at.

And that is pretty much the entire process. Obviously some steps will be repeated as needed if you add more objects and passes. The pipeline system ensures that the passes are run in the correct order to satisfy dependencies. In case you're wondering about the paint and paint-with inversion, the latter function is useful so that the shader pass gets to have absolute control over what is rendered and how if it so desires. For instance, it could completely ignore the scene and draw something else entirely.

That about concludes the explanation of Trial's shader system as it stands today. As I've mentioned over these two entries there are a few minor places where improvements have to be made, but overall the design seems quite solid. At least I hope I won't suddenly realise a severe limitation somewhere down the line, forcing me to rethink and rewrite everything again.

Next I think I will take a bit of a break from talking about engine details, and instead talk about geometry clipmaps and the adventure I had trying to implement them. Until then~

Quicklisp newsQuicklisp implementation stats for 2017+

· 133 days ago
Here are some raw HTTP request stats for each CL implementation supported by Quicklisp from 2017-01-01 onward. Each request corresponds roughly to downloading a single project.
  • SBCL: 4,518,774 requests
  • Clozure CL: 488,057 
  • CLISP: 34,767
  • LispWorks: 25,700
  • ABCL: 23,913 
  • Allegro: 19,501
  • ECL: 19,027
  • CLASP: 3,335
  • CMUCL: 965
  • MKCL: 79
  • Scieneer: 0
I gathered this info to check Scieneer activity levels to plan future support. Turns out nobody with Scieneer has used Quicklisp since 2013. Bummer!

TurtleWarecl-charms crash course

· 136 days ago

Writing a new CLIM backend

This work is meant as a showcase of how to write a new McCLIM backend. To make it more interesting to me, I'm writing it using the cl-charms library, which is a Common Lisp library for ncurses - the console manipulation library for UNIX systems. During development I'm planning to make notes about the necessary steps. If possible, I'll also write a test suite for backends which will test the functionality from the most basic parts (like creating windows) to more sophisticated ones (transformations and drawing). That should simplify verifying whether a new backend works fine and to what degree it is complete. We start with a crash course for the cl-charms library.

cl-charms crash course

Ensure you have the ncurses development package installed on your system. Start a real terminal (Emacs doesn't start *inferior-lisp* in something ncurses can work with) and launch your implementation. I use CCL because software is usually developed with SBCL and I want to catch as many problems with cl-charms as possible. After that, start a swank server and connect from your Emacs session.

~ ccl
? (defvar *console-io* *terminal-io*) ; we will need it later
*CONSOLE-IO*
? (ql:quickload 'swank :silent t)
(SWANK)
? (swank:create-server :port 4005 :dont-close t)
;; Swank started at port: 4005.
4005
? (loop (sleep 1))

We loop over sleep because we don't want the console prompt to read input meant for the console application. If you don't do that, you may see somewhat confusing behavior.

In Emacs: M-x slime-connect *Host:* localhost *Port:* 4005. Now we are working in the *slime-repl ccl* buffer in Emacs, and we have ncurses output in the terminal we launched the server from. Try some demos bundled with the library:

CL-USER> (ql:quickload '(cl-charms bordeaux-threads alexandria))
(CL-CHARMS BORDEAUX-THREADS ALEXANDRIA)
CL-USER> (ql:quickload '(cl-charms-paint cl-charms-timer) :silent t)
(CL-CHARMS-PAINT CL-CHARMS-TIMER)
CL-USER> (charms-timer:main) ; quit with Q, start/stop/reset with [SPACE]
CL-USER> (charms-paint:main) ; quit with Q, move with WSAD and paint with [SPACE]

Now we will go through various charms (and ncurses) capabilities. Our final goal is to have a window with four buttons and a text input box. Navigation should be possible with [TAB] / [SHIFT]+[TAB] and by selecting gadgets with the mouse pointer. Behold, time for the first application.

First application

Let's dissect this simple program which prints "Hello world!" on the screen:

(defun hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (loop named hello-world
       with window = (charms:make-window 50 15 10 10)
       do (progn
            (charms:clear-window window)
            (charms:write-string-at-point window "Hello world!" 0 0)
            (charms:refresh-window window)

            ;; Process input
            (when (eql (charms:get-char window) #\q)
              (return-from hello-world))
            (sleep 0.1)))))

The program must be wrapped in the charms:with-curses macro, which ensures proper initialization and finalization of the program. In this operator's context, the charms functions which configure the library are available. We use charms:disable-echoing to prevent unnecessary cluttering of the window (we interpret characters ourselves) and charms:enable-raw-input to turn off line buffering. charms:*standard-window* is a window covering the whole terminal screen.

We create a window for output (its size is 50x15 and its offset is 10x10) and then, in a loop, we print "Hello world!" (at the top-left corner of it) until the user presses the character q.

Extending cl-charms API

All functions used until now come from the higher-level interface. charms also has a low-level interface which maps to libncurses via CFFI. This interface is defined in the package named charms/ll. I highly recommend skimming through http://www.tldp.org/HOWTO/NCURSES-Programming-HOWTO which is a great overview of ncurses functionality.

We want borders around the window. The CFFI interface is a bit ugly (i.e. we would have to extract the window pointer to call wborder on it). We are going to abstract this with a function which plays nice with the lispy abstraction.

(defun draw-window-border (window
                           &optional
                             (ls #\|) (rs #\|) (ts #\-) (bs #\-)
                             (tl #\+) (tr #\+) (bl #\+) (br #\+))
  (apply #'charms/ll:wborder (charms::window-pointer window)
         (mapcar #'char-code (list ls rs ts bs tl tr bl br))))

(defun draw-window-box (window &optional (verch #\|) (horch #\-))
  (charms/ll:box (charms::window-pointer window) (char-code verch) (char-code horch)))

Now we can freely use draw-window-border. Put (draw-window-box window) after (charms:clear-window window) in the hello-world program and see the result. It is ugly, but what did you expect from a window rendered in the terminal?

It is worth mentioning that the border is drawn inside the window, so when we start writing a string at point [0,0] it overlaps with the border. If we want to paint content inside the border we should start at least at [1,1] and stop at [48,13].

A somewhat more appealing result may be achieved by giving the window a distinct background instead of drawing a border with characters. To do that we need to dive into the low-level interface once more. We define a colors API.

(defun start-color ()
  (when (eql (charms/ll:has-colors) charms/ll:FALSE)
    (error "Your terminal does not support color."))
  (let ((ret-code (charms/ll:start-color)))
    (if (= ret-code 0)
        T
        (error "start-color error ~s." ret-code))))

(eval-when (:load-toplevel :compile-toplevel :execute)
  (defconstant +black+   charms/ll:COLOR_BLACK)
  (defconstant +red+     charms/ll:COLOR_RED)
  (defconstant +green+   charms/ll:COLOR_GREEN)
  (defconstant +yellow+  charms/ll:COLOR_YELLOW)
  (defconstant +blue+    charms/ll:COLOR_BLUE)
  (defconstant +magenta+ charms/ll:COLOR_MAGENTA)
  (defconstant +cyan+    charms/ll:COLOR_CYAN)
  (defconstant +white+   charms/ll:COLOR_WHITE))

(defmacro define-color-pair ((name pair) foreground background)
  `(progn
     (start-color)
     (defparameter ,name (progn (charms/ll:init-pair ,pair ,foreground ,background)
                                (charms/ll:color-pair ,pair)))))

(define-color-pair (+white/blue+ 1) +white+ +blue+)
(define-color-pair (+black/red+ 2) +black+ +red+)

(defun draw-window-background (window color-pair)
  (charms/ll:wbkgd (charms::window-pointer window) color-pair))

(defmacro with-colors ((window color-pair) &body body)
  (let ((winptr (gensym)))
    (alexandria:once-only (color-pair)
      `(let ((,winptr (charms::window-pointer ,window)))
         (charms/ll:wattron ,winptr ,color-pair)
         ,@body
         (charms/ll:wattroff ,winptr ,color-pair)))))

start-color must be called when we configure the library. We map charms/ll constants to Lisp constants and create the define-color-pair macro. This abstraction could be improved so that we are not forced to supply pair numbers ourselves and so that a proper association between names and integers is maintained; we skip that step for brevity. We define two color pairs, a function for filling a window background, and the macro with-colors for drawing with a specified palette. Finally, let's use the new abstractions in the pretty-hello-world function:

(defun pretty-hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (start-color)
    (loop named hello-world
       with window = (charms:make-window 50 15 10 10)
       do (progn
            (charms:clear-window window)
            (draw-window-background window +white/blue+)
            (with-colors (window +white/blue+)
              (charms:write-string-at-point window "Hello world!" 0 0))
            (with-colors (window +black/red+)
              (charms:write-string-at-point window "Hello world!" 0 1))
            (charms:refresh-window window)

            ;; Process input
            (when (eql (charms:get-char window :ignore-error t) #\q)
              (return-from hello-world))
            (sleep 0.1)))))

The result looks, as promised in the function name, very pretty ;-)

Asynchronous input

Printing "Hello world!" doesn't satisfy our needs; we want to interact with the brilliant software we've just made while it's running. Even more, we want to do it without blocking the computations going on in the system (which are truly amazing, believe me). First, let's visualise these computations to know that they are really happening. Modify the program loop to draw "Hello world!" in a different color on each iteration.

(defun amazing-hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (start-color)
    (loop named hello-world
       with window = (charms:make-window 50 15 10 10)
       for flip-flop = (not flip-flop)
       do (progn
            (charms:clear-window window)
            (draw-window-background window +white/blue+)
            (with-colors (window (if flip-flop
                                     +white/blue+
                                     +black/red+))
              (charms:write-string-at-point window "Hello world!" 0 0))
            (charms:refresh-window window)
            ;; Process input
            (when (eql (charms:get-char window :ignore-error t) #\q)
              (return-from hello-world))
            (sleep 1)))))

Something is not right. When we run amazing-hello-world to see it flipping - it doesn't. Our program is flawed. It waits for each character to verify that the user hasn't requested application exit. You must press a key (e.g. space) to proceed to the next iteration. Now we must think of how to obtain input from the user without halting the application.

To do that we can enable non blocking mode for our window.

with window = (let ((win (charms:make-window 50 15 10 10)))
                (charms:enable-non-blocking-mode win)
                win)

Unfortunately, this solution is not complete. It works reasonably well, but we have to wait up to a second (because the "computation" is performed every second - we call sleep after each get-char) before the character is handled. It gets even worse when we notice that pressing the character b five times and then q will delay processing by six seconds (characters are processed one after another in different iterations, with a one second sleep between them). We need something better.

I hear your internal scream: use threads! It is important to keep in mind that if you can get by without threads you probably should (the same applies to caches and many other clever techniques which introduce even cleverer bugs). Keep also in mind that ncurses is not thread-safe. We are going to listen for events from all inputs like select does, and generate a "recompute" event each second. On implementations which support timers we could use those, but we'll use... a thread to generate "ticks". Note that we use the thread as an asynchronous input source rather than accessing charms asynchronously.

;;; asynchronous input hack (should be a mailbox!)
(defparameter *recompute-flag* nil "ugly and unsafe hack for communication")
(defvar *recompute-thread* nil)

(defun start-recompute-thread ()
  (when *recompute-thread*
    (bt:destroy-thread *recompute-thread*))
  (setf *recompute-thread*
        (bt:make-thread
         #'(lambda ()
             (loop
                (sleep 1)
                (setf *recompute-flag* t))))))

(defun stop-recompute-thread ()
  (when *recompute-thread*
    (bt:destroy-thread *recompute-thread*)
    (setf *recompute-thread* nil)))

In this snippet we create an interface for starting a thread which sets a global flag. The general solution would be a mailbox (or a thread-safe stream) which the asynchronous thread writes to and the event loop reads from. We will settle for this hack though (it is a crash course, not a book, after all). Start the recompute thread in the background before you start the new application. Note that this code is not thread-safe - we concurrently read and write a global variable. We are also very drastic with bt:destroy-thread, something not recommended in any code which is not a demonstration like this one.
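For reference, here is a minimal sketch of such a mailbox built on bordeaux-threads locks; the names mailbox, mailbox-post and mailbox-poll are invented here for illustration:

;;; Minimal thread-safe mailbox sketch (illustration only).
(defclass mailbox ()
  ((queue :initform '() :accessor mailbox-queue)
   (lock :initform (bt:make-lock) :reader mailbox-lock)))

(defun mailbox-post (mailbox event)
  "Append EVENT to the mailbox queue."
  (bt:with-lock-held ((mailbox-lock mailbox))
    (setf (mailbox-queue mailbox)
          (nconc (mailbox-queue mailbox) (list event)))))

(defun mailbox-poll (mailbox)
  "Pop and return the oldest event, or NIL when the mailbox is empty."
  (bt:with-lock-held ((mailbox-lock mailbox))
    (pop (mailbox-queue mailbox))))

With this in place the recompute thread would post a :compute event each second, and the event loop would poll the mailbox instead of reading a global flag.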

Time to refactor the input and output functions: display-amazing-hello-world and get-amazing-hello-world-input.

(defun display-amazing-hello-world (window flip-flop)
  (charms:clear-window window)
  (draw-window-background window +white/blue+)
  (with-colors (window (if flip-flop
                           +white/blue+
                           +black/red+))
    (charms:write-string-at-point window "Hello world!" 0 0))
  (charms:refresh-window window))

(defun get-amazing-hello-world-input (window)
  (when *recompute-flag*
    (setf *recompute-flag* nil)
    (return-from get-amazing-hello-world-input :compute))
  (charms:get-char window :ignore-error t))

And finally, the improved application, which takes asynchronous input without blocking.

(defun improved-amazing-hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (start-color)
    (let ((window (charms:make-window 50 15 10 10))
          (flip-flop nil))
      (charms:enable-non-blocking-mode window)
      (display-amazing-hello-world window flip-flop)
      (loop named hello-world
         do (case (get-amazing-hello-world-input window)
              ((#\q #\Q) (return-from hello-world))
              (:compute (setf flip-flop (not flip-flop))
                        (display-amazing-hello-world window flip-flop))
              ;; don't be a pig to a processor
              (otherwise (sleep 1/60)))))))

When you are done with the demo you may call stop-recompute-thread to spare your image the unnecessary flipping of a global variable.

Gadgets and input handling

So we have created an amazing piece of software which does the computation and reacts (instantaneously!) to our input. Greed is an amazing phenomenon - we want more... We want an interactive application with buttons and an input box (allowing us to influence the amazing computation at run time).

First we define an abstract class gadget.

(defparameter *active-gadget* nil)

;;; gadget should be type of `window' or `panel' - we are simplistic
(defclass gadget ()
  ((position :initarg :position :accessor gadget-position)))

(defgeneric display-gadget (window gadget &key &allow-other-keys)
  (:method ((window charms:window) (gadget gadget) &key)
    (declare (ignore window gadget))))

(defgeneric handle-input (gadget input &key &allow-other-keys)
  (:method (gadget input &key)
    (declare (ignore gadget input))))

In our model each gadget has at least a position, a display function and a method for handling input. Both methods are gadget-specific, with defaults doing nothing. We also define a parameter *active-gadget* which holds the gadget currently receiving input.

handle-input returns T only if the input causes the gadget to need redisplaying; otherwise it should return NIL (a small optimization which we will use later in the main application loop).

Let's define something we will use in our application.

(define-color-pair (+black/white+ 3) +black+ +white+) ; color for text-input (inactive)
(define-color-pair (+black/cyan+ 4) +black+ +cyan+)   ; color for text-input (active)
(defparameter *computation-name* "Hello world!")

(defclass text-input-gadget (gadget)
  ((buffer :initarg :buffer :accessor gadget-buffer)
   (width :initarg :width :reader gadget-width)))

(defun make-text-input-gadget (width x y)
  (make-instance 'text-input-gadget
                 :width width
                 :position (cons x y)
                 :buffer (make-array width
                                     :element-type 'character
                                     :initial-element #\space
                                     :fill-pointer 0)))

(defmethod display-gadget ((window charms:window) (gadget text-input-gadget) &key)
  (with-colors (window (if (eql gadget *active-gadget*)
                           +black/cyan+
                           +black/white+))
    (let ((background (make-string (gadget-width gadget) :initial-element #\space)))
      (destructuring-bind (x . y) (gadget-position gadget)
        (charms:write-string-at-point window background x y)
        (charms:write-string-at-point window (gadget-buffer gadget) x y)))))

(defmethod handle-input ((gadget text-input-gadget) input &key)
  (let ((buffer (gadget-buffer gadget)))
    (case input
      ((#\Backspace #\Rubout)
       (unless (zerop (fill-pointer buffer))
         (vector-pop buffer)))
      ((#\Return #\Newline)
       (unless (zerop (fill-pointer buffer))
         (setf *computation-name* (copy-seq buffer)
               (fill-pointer buffer) 0)))
      (#\ESC
       (setf (fill-pointer buffer) 0))
      (otherwise
       (when (ignore-errors (graphic-char-p input))
         (vector-push input buffer))))))

The first gadget we define is text-input-gadget. What we need as its internal state is a buffer which holds the text typed into the box. We also care about the maximal width of the string.

Moreover, we define colors for it to use in display-gadget (we depend on the global parameter *active-gadget*, which is in very poor taste). In the display function we create a "background" (which wouldn't be necessary if the gadget were a panel, an abstraction defined in a library accompanying ncurses) and then, at the gadget position, we draw the background and the buffer contents (the text which was already typed into the text-input-gadget).

The second function, handle-input, interprets the characters it receives and acts accordingly. If it is backspace (or rubout, as on my keyboard with these terminal settings), we remove one element. If it is return (or newline), we change the computation name and empty the input. Escape clears the box. Finally, if it is a character we can print, we add it to the text input (vector-push won't extend the vector length, so extra characters are ignored).
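You can poke the gadget from the REPL to see this buffer logic in action (a hypothetical session; return values elided):

(defparameter *g* (make-text-input-gadget 26 2 13))
(handle-input *g* #\h)      ; pushes #\h onto the buffer
(handle-input *g* #\i)      ; pushes #\i
(gadget-buffer *g*)         ; => "hi"
(handle-input *g* #\Return) ; sets *computation-name* to "hi" and clears the buffer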

(defparameter *gadgets*
  (list (make-text-input-gadget 26 2 13)))

(defun display-greedy-hello-world (window flip-flop)
  (charms:clear-window window)
  (draw-window-background window +white/blue+)
  (with-colors (window (if flip-flop
                           +white/blue+
                           +black/red+))
    (charms:write-string-at-point window *computation-name* 2 1))
  (dolist (g *gadgets*)
    (if (eql g *active-gadget*)
        (display-gadget window g)
        (charms:with-restored-cursor window
          (display-gadget window g))))
  (charms:refresh-window window))

We maintain a list of gadgets which are displayed one by one in the window (after signalling that the computation is being performed). Previously we had hard-coded the "Hello world!" name, but now we depend on the variable *computation-name*, which may be modified from the input box.

Each gadget is displayed, but only *active-gadget* is allowed to modify the cursor position. The other rendering is wrapped in the charms:with-restored-cursor macro, which does what the name suggests. That means the cursor will be positioned wherever *active-gadget* puts it (or, if it doesn't modify its position, at the end of the computation string).

(defun get-greedy-hello-world-input (window)
  (when *recompute-flag*
    (setf *recompute-flag* nil)
    (return-from get-greedy-hello-world-input :compute))
  (charms:get-char window :ignore-error t))

(defun greedy-hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (start-color)
    (let ((window (charms:make-window 50 15 10 10))
          (flip-flop nil))
      (charms:enable-non-blocking-mode window)
      (display-greedy-hello-world window flip-flop)
      (catch :exit
        (loop
           do (let ((input (get-greedy-hello-world-input window)))
                (case input
                  (#\Dc1 ;; this is C-q
                   (throw :exit :c-q))
                  (#\Dc2 ;; this is C-r
                   (charms:clear-window charms:*standard-window*)
                   (charms:refresh-window charms:*standard-window*)
                   (display-greedy-hello-world window flip-flop))
                  (#\tab
                   (alexandria:if-let ((remaining (cdr (member *active-gadget* *gadgets*))))
                     (setf *active-gadget* (car remaining))
                     (setf *active-gadget* (car *gadgets*)))
                   (display-greedy-hello-world window flip-flop))
                  (:compute
                   (setf flip-flop (not flip-flop))
                   (display-greedy-hello-world window flip-flop))
                  (otherwise
                   (if (handle-input *active-gadget* input)
                       ;; redisplay only if handle-input returns non-NIL
                       (display-greedy-hello-world window flip-flop)
                       ;; don't be a pig to a processor
                       (sleep 1/60))))))))))

get-greedy-hello-world-input is for now much the same as get-amazing-hello-world-input. greedy-hello-world, on the other hand, handles input differently. For quitting we reserve C-q instead of q, because we want to be able to type that character in the input box. We also add the C-r sequence to refresh the whole screen (just in case we resize the terminal and some glitches remain). :compute is handled the same way as previously. Finally, if the input is something else, we feed it to *active-gadget*. If handle-input returns something other than NIL, we redisplay the application.

Note that if the application is not redisplayed (i.e. the input was not handled, because *active-gadget* was NIL or because it didn't handle the input), we are a good citizen and, instead of hogging the processor, we wait 1/60 of a second. On the other hand, if events come one after another, we are legitimately busy, so we skip this rest time and continue the event loop.

We don't yet have a means to change which gadget is active. For that we add a new key handled in the main event loop of the application: #\tab. At last, we wrap the whole loop body in (catch :exit) to allow a dynamic exit (instead of a lexical one with a block/return-from pair).

Start the application and make the text-input-gadget active by pressing [tab]. Notice that its background changes. Now we have a working input box where we can type a new string; when we confirm it with [enter], the string at the top changes.

We want more gadgets. For now we will settle for buttons.

(defclass button-gadget (gadget)
  ((label :initarg :label :reader gadget-label)
   (action :initarg :action :reader gadget-action)))

(defun make-button-gadget (text callback x y)
  (make-instance 'button-gadget :label text :action callback :position (cons x y)))

(defmethod display-gadget ((window charms:window) (gadget button-gadget) &key)
  (with-colors (window (if (eql gadget *active-gadget*)
                           +red/black+
                           +yellow/black+))
    (destructuring-bind (x . y) (gadget-position gadget)
      (charms:write-string-at-point window (gadget-label gadget) x y))))

(defmethod handle-input ((gadget button-gadget) input &key)
  (when (member input '(#\return #\newline))
    (funcall (gadget-action gadget))))

Each button is a gadget which (in addition to the inherited position) has a label and an associated action. The button label is drawn in red or yellow ink depending on whether it is active. Input handling reacts only to pressing [enter]: when the button is activated, the gadget's action is funcall-ed (without arguments).
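The pairs +red/black+ and +yellow/black+ haven't appeared in the snippets so far; if they are missing from your session, definitions along these lines will do (the pair numbers are arbitrary, as long as they don't collide with the pairs defined earlier):

(define-color-pair (+yellow/black+ 5) +yellow+ +black+) ; inactive button
(define-color-pair (+red/black+ 6) +red+ +black+)       ; active button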

Modify the *gadgets* parameter to contain our four buttons and run the application (try pressing [tab] a few times to see that the active widget changes).

(defparameter *gadgets*
  (list (make-text-input-gadget 26 2 13)
        (make-button-gadget " Toggle " 'toggle-recompute-thread 30 11)
        (make-button-gadget "  Exit  " 'exit-application 40 11)
        (make-button-gadget " Accept " 'accept-input-box 30 13)
        (make-button-gadget " Cancel " 'cancel-input-box 40 13)))

(defun toggle-recompute-thread ()
  (if *recompute-thread*
      (stop-recompute-thread)
      (start-recompute-thread)))

(defun accept-input-box ()
  (handle-input (car *gadgets*) #\return))

(defun cancel-input-box ()
  (handle-input (car *gadgets*) #\esc))

(defun exit-application ()
  (throw :exit :exit-button))

We have created four buttons: Toggle starts/stops the "computation" thread which feeds us with input, Exit quits the application (that's why we changed block to catch in the greedy-hello-world main loop), and Accept / Cancel pass return and escape input to the text-input-gadget. Once again we prove our lack of good taste, because we aim at the first element of the *gadgets* parameter, assuming that the first element is the text input field.

The result looks a little like a user interface, doesn't it? Try activating the various buttons and check whether text input works as desired. When you select Toggle and press [enter], the computation label at the top will start / stop blinking in one-second intervals.

Mouse integration

Our software has reached the phase where it is production-ready. Investors are crawling at our doorbell because they feel it is something that will boost their income. We are sophisticated and we have proposed a neat idea which will have a tremendous impact on the market. We have even created buttons and text-input gadgets, which are the key to success. Something is still missing though. After a brainstorming meeting with the most brilliant minds of our decade, we've come to a conclusion - what we are missing is mouse integration. That's crazy, I know. A mouse on a terminal? It is heresy! Yet we believe that we have to take the risk if we want to outpace the competition. Roll up your sleeves - we are going to change the face of modern UI design! ;-)

First we will start by abstracting things to make it easy to distribute events among gadgets. To do that we need to know where a pointer event has happened. Let's assume that a mouse event is composed of three parameters: the button, the x coordinate and the y coordinate. Each gadget occupies some space on the screen (let's call it a region) which may be characterized by its bounding rectangle, with [min-x, min-y] and [max-x, max-y] points.

(defgeneric bounding-rectangle (gadget)
  (:method ((gadget text-input-gadget))
    (destructuring-bind (x . y) (gadget-position gadget)
      (values x
              y
              (+ x -1 (gadget-width gadget))
              y)))
  (:method ((gadget button-gadget))
    (destructuring-bind (x . y) (gadget-position gadget)
      (values x
              y
              (+ x -1 (length (gadget-label gadget)))
              y))))

We could refactor button-gadget to have a gadget-width method (that would simplify the implementation, as sketched below), but let's use what we have now. Having such a representation of the bounding rectangle, we can easily determine whether a cursor event occurred "over" the gadget or somewhere else.
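For the record, that refactor could look like the following sketch. gadget-width already exists as the reader on text-input-gadget, so we only add a method for buttons and collapse bounding-rectangle into a single default method:

(defmethod gadget-width ((gadget button-gadget))
  (length (gadget-label gadget)))

(defgeneric bounding-rectangle (gadget)
  (:method ((gadget gadget))
    (destructuring-bind (x . y) (gadget-position gadget)
      (values x y (+ x -1 (gadget-width gadget)) y))))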

(defun region-contains-position-p (gadget x y)
  (multiple-value-bind (x-min y-min x-max y-max)
      (bounding-rectangle gadget)
    (and (<= x-min x x-max)
         (<= y-min y y-max))))

Now distributing a mouse event is as easy as iterating over all gadgets and verifying whether the event applies to the gadget. If it does, we make that gadget active. Moreover, if the left mouse button was clicked, we simulate a [return] key press, to be handled by the previously defined handle-input method.

(defun distribute-mouse-event (bstate x y)
  (dolist (g *gadgets*)
    (when (region-contains-position-p g x y)
      (setf *active-gadget* g)
      (when (eql bstate charms/ll:button1_clicked)
        (handle-input g #\return))
      (return))))

Notice that we have used the low-level charms interface (comparing bstate with charms/ll:button1_clicked). Other events are also defined in the charms/ll package, so you may expand the mouse handling interface.

Time to define the display function for our enterprise application. The ncurses cursor should follow mouse movement, so displaying gadgets can't be allowed to affect the cursor position. To achieve that we wrap the whole display function in the charms:with-restored-cursor macro.

(defun display-enterprise-hello-world (window flip-flop)
  (charms:with-restored-cursor window
    (charms:clear-window window)
    (draw-window-background window +white/blue+)
    (if flip-flop
        (with-colors (window +white/blue+)
          (charms:write-string-at-point window *computation-name* 2 1))
        (with-colors (window +black/red+)
          (charms:write-string-at-point window *computation-name* 2 1)))
    (dolist (g *gadgets*) (display-gadget window g))
    (charms:refresh-window window)))

Usually terminal emulators don't report mouse movement (only clicks). To enable such reports, print the following to the terminal output (note that slime doesn't bind *terminal-io* to the right thing, so we use *console-io*, which we defined at the beginning):

(format *console-io* "~c[?1003h" #\esc)

This escape sequence sets the xterm mode for reporting any mouse event, including mouse movement (see http://invisible-island.net/xterm/ctlseqs/ctlseqs.html). It is usually honored by other terminal emulators (not only xterm). Without it we wouldn't be able to track mouse movement (only pointer press, release, scroll etc.). This is important to us because we want to activate gadgets when the mouse moves over them.

To start the mouse and configure its events, let's create an initialization function. We configure the terminal to report all mouse events, including the mouse position. After that we tell the terminal emulator to report mouse position events.

(defun start-mouse ()
  (charms/ll:mousemask
   (logior charms/ll:all_mouse_events charms/ll:report_mouse_position))
  (format *console-io* "~c[?1003h~%" #\esc))
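
Symmetrically, when we are done we can mask all mouse events and reset the reporting mode with the matching escape sequence (a trailing l instead of h); stop-mouse is our own name for this counterpart:

(defun stop-mouse ()
  (charms/ll:mousemask 0)                    ; report no mouse events
  (format *console-io* "~c[?1003l~%" #\esc)) ; stop movement reports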

We need to handle the new event type :key-mouse (the already-handled types are C-q, C-r, TAB, :compute and "other" characters). Since event handling gets more complicated, we factor it into a separate function, enterprise-process-event. Note the handler-case: when the mouse event queue is empty (for instance because the mouse event is masked), the function will signal an error. We also take into account that mouse events are reported in absolute terminal coordinates, while our window starts at the point [10,10]. Additionally we implement the shift+tab sequence, which on my system is reported as #\Latin_Small_Letter_S_With_Caron.

(defun get-enterprise-hello-world-input (window)
  (when *recompute-flag*
    (setf *recompute-flag* nil)
    (return-from get-enterprise-hello-world-input :compute))
  (let ((c (charms/ll:wgetch (charms::window-pointer window))))
    (when (not (eql c charms/ll:ERR))
      (alexandria:switch (c)
        (charms/ll:KEY_BACKSPACE #\Backspace)
        (charms/ll:KEY_MOUSE :KEY-MOUSE)
        (otherwise (charms::c-char-to-character c))))))

(defun enterprise-process-event (window flip-flop)
  (loop
     (let ((input (get-enterprise-hello-world-input window)))
       (case input
         (#\Dc1 ;; this is C-q
          (throw :exit :c-q))
         (#\Dc2 ;; this is C-r
          (display-enterprise-hello-world window flip-flop))
         (#\tab
          (alexandria:if-let ((remaining (cdr (member *active-gadget* *gadgets*))))
            (setf *active-gadget* (car remaining))
            (setf *active-gadget* (car *gadgets*)))
          (display-enterprise-hello-world window flip-flop))
         (#\Latin_Small_Letter_S_With_Caron ;; this is S-[tab]
          (if (eql *active-gadget* (car *gadgets*))
              (setf *active-gadget* (alexandria:lastcar *gadgets*))
              (do ((g *gadgets* (cdr g)))
                  ((eql *active-gadget* (cadr g))
                   (setf *active-gadget* (car g)))))
          (display-enterprise-hello-world window flip-flop))
         (:key-mouse
          (handler-case (multiple-value-bind (bstate x y z id)
                            (charms/ll:getmouse)
                          (declare (ignore z id))
                          ;; window starts at 10,10
                          (decf x 10)
                          (decf y 10)
                          (charms:move-cursor window x y)
                          (distribute-mouse-event bstate x y)
                          (display-enterprise-hello-world window flip-flop))
            (error () nil)))
         (:compute
          (setf flip-flop (not flip-flop))
          (display-enterprise-hello-world window flip-flop))
         (otherwise
          (if (handle-input *active-gadget* input)
              ;; redisplay only if handle-input returns non-NIL
              (display-enterprise-hello-world window flip-flop)
              ;; don't be a pig to a processor
              (sleep 1/60)))))))

To make the terminal report mouse events in an intelligible way, we need to call the function charms:enable-extra-keys (thanks to that we don't have to deal with raw character sequences). We also call start-mouse.

(defun enterprise-hello-world ()
  (charms:with-curses ()
    (charms:disable-echoing)
    (charms:enable-raw-input)
    (start-color)
    (let ((window (charms:make-window 50 15 10 10))
          (flip-flop nil))
      ;; full enterprise ay?
      (charms:enable-non-blocking-mode window)
      (charms:enable-extra-keys window)
      (start-mouse)
      (display-enterprise-hello-world window flip-flop)
      (catch :exit
        (loop (funcall 'enterprise-process-event window flip-flop))))))

Our final result is splendid! We got rich :-). The source of the following asciinema video is located here.

asciicast

To finish our Lisp session, type (quit) in the slime REPL.

Conclusions

In this crash course we have explored parts of the cl-charms interface (windows, colors, mouse integration...) and defined an ad-hoc toolkit for the user interface. There is much more to learn, and the library may be expanded (especially with regard to the high-level interface, but also to include the supplementary ncurses libraries like panel, menu and forms). The ability to run a charms application in a new terminal started with run-program would be beneficial too.

Some programmers may have noticed that we have defined an ad-hoc, informally-specified, bug-ridden, slow implementation of 1/10th of CLIM. No wonder - CLIM defines a very good abstraction for building UIs in a consistent manner, and it is a standard (with a full specification).

Instead of reinventing the wheel, we want to plug into the CLIM abstractions and use cl-charms as a CLIM backend. This option will be explored in the near future - it will help to document the process of writing new backends. Of course, such a backend will be limited in many ways - that will be an opportunity to explore bugs which go unnoticed when the smallest distance is a pixel (in contrast to a reasonably big terminal character size). Stay tuned :-)

Zach Beane: Planet Lisp is back on twitter

· 136 days ago

Years ago, Hans Hübner created the @planet_lisp twitter account and made some Lisp software to keep it updated with recent posts from Planet Lisp.

But it stopped working last summer, and he offered to let someone else run it, so I wrote new software and it should be working again.

This post is half announcement, half a test to make sure the new gateway works.

Enjoy!

Nicolas Hafner: Integrating Shaders and Objects - Gamedev

· 138 days ago

In this entry I'll describe the way in which shaders are integrated into the object system in Trial. It serves as a predecessor to the next entry, which will be about shader pipelines.

In case you're not familiar with modern graphics programming, a shader is a piece of code that runs on the GPU during a particular stage in the rendering pipeline. The customisable part of the pipeline looks as follows:

Vertex → (Tessellation Control) → (Tessellation Evaluation) → (Geometry) → Fragment

In order to run anything at all, you need to provide shaders for the vertex and the fragment stages. The vertex stage emits vertices that form triangles. The tessellation stages can then subdivide these triangles further to add detail. The geometry stage can add entirely new triangles, for instance to produce grass. The triangles are then rasterised, and for each fragment (pixel) that is drawn the fragment shader is evaluated to produce the resulting colour.

Whenever you perform any drawing command, this pipeline is run. Which particular shaders are run depends on the shader program that is bound at the time of drawing. The shader program ties together shaders for the different stages to form the complete pipeline.

Different objects in the scene you're drawing will need different shaders to produce individual effects and change drawing behaviour. However, often a large part of the shaders will be similar or the same. As such, it seems sensible to introduce some kind of system to allow sharing bits and pieces of shader logic between objects. I'd like to do some in-depth research into how other engines approach this problem and publish a paper on it at some point. For now, here's a brief summary of the strategies I've seen so far:

  • Special #include directives that are run through a pre-processor to splice other files in.
  • A "mega-shader" that includes everything, but sections are conditionalised through #ifdefs.
  • No strategy at all. People just copy paste things where they need them.

The drawback of the include strategy is that the included code won't have any idea about where it's included. It limits you to only being able to include function definitions that are fully encapsulated. The drawback of the mega-shader should be obvious: you're creating a huge mess of a file. Simply copy-pasting is also obviously a bad idea, since it doesn't really share anything at all.

The other issue that all of these approaches have in common is that they do not integrate with the rest of the engine's entity system at all. In almost all cases there will be some kind of inheritance system to allow reusing and combining behaviour between common components. However, all of these combination approaches won't include shaders; they still have to be handled separately.

Trial solves this in two steps. The first step is the combination of shader parts, which is solved by glsl-toolkit. This library allows you to parse GLSL code into an AST, run static analysis on it, and use that to combine different code parts logically. Namely, it will recognise shared global variables and main functions, and will automatically rewrite the code to fit together properly. Naturally, there are limits to how well it can do this, but overall it has worked pretty well so far. There's other really interesting things that could be done with glsl-toolkit, like early error checking, inlining, cleanup, and so forth.
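To get a feel for what this merging does, here is a toy invocation (the exact renaming scheme may differ between glsl-toolkit versions; compare the merged output near the end of this post):

(glsl-toolkit:merge-shader-sources
 (list "out vec4 color;
void main(){ color = vec4(1.0, 1.0, 1.0, 1.0); }"
       "out vec4 color;
void main(){ color *= 0.5; }"))
;; => a single shader source in which the duplicate declaration of
;;    color is merged and the two mains are renamed (to something
;;    like _GLSLTK_main_1 and _GLSLTK_main_2) and called in order
;;    from a fresh main function.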

The second step is to use the MOP to tie shader code to classes. That way multiple inheritance can be used to combine shader parts as needed, alongside the other bits of logic that are naturally combined through methods and slots. When an instance of a class is finally created, all the shader parts can then be merged together to form a complete shader program.

In Trial this is implemented through the shader-entity class. Aside from a bunch of convenience method definitions and some MOP boilerplate, this is actually not that much code. Let's take a look at how the shaders are computed, which is the most substantial part of the file.

(defmethod compute-effective-shaders ((class shader-entity-class))
  (let ((effective-shaders ())
        (inhibited (inhibited-shaders class))
        (superclasses (remove 'shader-entity-class
                              (c2mop:compute-class-precedence-list class)
                              :test-not (lambda (type class) (typep class type)))))
    ;; Check whether inhibits are effective
    (loop for (name type) in inhibited
          for super = (find name superclasses :key #'class-name)
          do (cond ((not super)
                    (warn "No superclass ~s in hierarchy of ~s. Cannot inhibit its shader ~s." name (class-of super) (class-name class))
                    (setf (inhibited-shaders class) (remove (list name type) inhibited :test #'equal)))
                   ((not (getf (direct-shaders super) type))
                    (warn "No shader of type ~s is defined on ~s. Cannot inhibit it for ~s." type name (class-name class))
                    (setf (inhibited-shaders class) (remove (list name type) inhibited :test #'equal)))))
    ;; Compute effective inhibited list
    (loop for super in superclasses
          do (setf inhibited (append inhibited (inhibited-shaders super))))
    ;; Make all direct shaders effective
    (loop for (type shader) on (direct-shaders class) by #'cddr
          do (setf (getf effective-shaders type)
                   (list shader)))
    ;; Go through all superclasses in order
    (loop for super in superclasses
          do (loop for (type shader) on (direct-shaders super) by #'cddr
                   unless (find (list (class-name super) type) inhibited :test #'equal)
                   do (pushnew shader (getf effective-shaders type))))
    ;; Compute effective single shader sources
    (loop for (type shaders) on effective-shaders by #'cddr
          do (setf (getf effective-shaders type)
                   (glsl-toolkit:merge-shader-sources
                    (loop for (priority shader) in (stable-sort shaders #'> :key #'first)
                          collect (etypecase shader
                                    (string shader)
                                    (list (destructuring-bind (pool path) shader
                                            (pool-path pool path))))))))
    (setf (effective-shaders class) effective-shaders)))

It might look a bit intimidating at first, but it's not too complicated. First we compute a list of superclasses that are also shader-entity-classes. Then we go through the list of inhibition specs, which allow you to prevent certain shaders from being inherited. This can be useful if you want to slightly alter the way the shader is implemented without having to also rewrite the other parts of functionality that come with inheritance. When going through the inhibitions, we check whether they are even applicable at all, to give the user a bit of error checking. We then combine the inhibition specs with those of all the superclasses. This is done because we always compute a fresh effective shader from all direct shaders on the superclasses, so we need to preserve inhibition from superclasses as well. Next we push all the shaders that were defined directly on the class to the effective shaders. Then we go through all the superclasses in order and, unless their shader was inhibited, push it onto the effective shaders. Finally, for each shader stage we combine all the shader parts using glsl-toolkit's merge-shader-sources. The merging is done in the order of priority of the shaders. Adding a priority number to shaders allows you to bypass the natural order by inheritance, and instead make sure your code gets included at a particular point. This can be important because merge semantics change depending on the order.
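As a concrete illustration of inhibition: Trial exposes it as a class option on define-shader-entity. A sketch (the class name is invented here; see Trial's sources for the exact option syntax):

(define-shader-entity my-flat-entity (vertex-entity)
  ()
  (:inhibit-shaders (vertex-entity :vertex-shader)))

my-flat-entity still inherits everything else from vertex-entity, but is expected to supply its own vertex shader.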

Let's look at an example of how this can be useful. I will omit code that isn't directly related to the shaders. If you want to have a closer look at how it all works, check the helpers.lisp file.

(define-shader-entity vertex-entity () ())

(define-class-shader (vertex-entity :vertex-shader)
  "layout (location = 0) in vec3 position;

uniform mat4 model_matrix;
uniform mat4 view_matrix;
uniform mat4 projection_matrix;

void main(){
  gl_Position = projection_matrix * view_matrix * model_matrix * vec4(position, 1.0f);
}")

The vertex-entity is a base class for almost all entities that have a vertex array that should be drawn to the screen. It provides a single vertex shader that does the basic vertex translation from model space to clip space. In case you're not familiar with these terms, model space means that vertex positions are relative to the model's origin. Clip space means vertex positions are relative to the OpenGL viewport (window). If we ever have an object that draws a model in some way, this class is now a good candidate for inclusion as a superclass.

(define-shader-entity colored-entity () ())

(define-class-shader (colored-entity :fragment-shader)
  "uniform vec4 objectcolor;
out vec4 color;

void main(){
  color *= objectcolor;
}")

This class is a bit smaller and simply gives you the ability to colour the resulting fragments uniformly. Note however that we don't simply set the output colour to our object's colour, but instead multiply it. This means that this class, too, can be combined with others.

(define-shader-entity textured-entity () ())

(define-class-shader (textured-entity :vertex-shader)
  "layout (location = 1) in vec2 in_texcoord;
out vec2 texcoord;

void main(){
  texcoord = in_texcoord;
}")

(define-class-shader (textured-entity :fragment-shader)
  "in vec2 texcoord;
out vec4 out_color;
uniform sampler2D texture_image;

void main(){
  out_color *= texture(texture_image, texcoord);
}")

Finally, the textured-entity class provides you with the ability to stick textures onto your vertices. To do so it needs another attribute that describes the texture coordinates for each vertex and, using the vertex shader, passes these on down the line to the fragment shader. There it does a lookup into the texture and, as with the previous entity, multiplies the output colour with that of the texture's.

Now for the interesting bit: if we want to create an object that is drawn using a vertex array, is uniformly textured, and should be tinted with a particular colour, we can simply do this:

(define-shader-entity funky-object (vertex-entity colored-entity textured-entity) ())

When we finalise the inheritance on this class, all the shaders it inherits are combined to produce single shaders as we would expect:

TRIAL> (effective-shaders 'funky-object)
(:VERTEX-SHADER "#version 330 core
layout(location = 1) in vec2 in_texcoord;
out vec2 texcoord;

void _GLSLTK_main_1(){
  texcoord = in_texcoord;
}
layout(location = 0) in vec3 position;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
uniform mat4 projection_matrix;

void _GLSLTK_main_2(){
  gl_Position = (projection_matrix * (view_matrix * (model_matrix * vec4(position, 1.0))));
}

void main(){
  _GLSLTK_main_1();
  _GLSLTK_main_2();
}"
:FRAGMENT-SHADER "#version 330 core
out vec4 color;

void _GLSLTK_main_1(){
  color = vec4(1.0, 1.0, 1.0, 1.0);
}
in vec2 texcoord;
uniform sampler2D texture_image;

void _GLSLTK_main_2(){
  color *= texture(texture_image, texcoord);
}
uniform vec4 objectcolor;

void _GLSLTK_main_3(){
  color *= objectcolor;
}

void main(){
  _GLSLTK_main_1();
  _GLSLTK_main_2();
  _GLSLTK_main_3();
}")

As you can see, the individual main functions got renamed and coupled, and specifications for in and out variables that denoted the same thing got merged together. The code perhaps isn't structured as nicely as one would like, but there's always room for improvements.

This inheritance and combination of shader parts is pretty powerful. While it will probably not allow you to produce very efficient shaders, this should not be a big concern for an overwhelmingly large class of projects, especially during development. Instead, this "Lego blocks" approach to shaders and graphics functionality allows you to be very productive in the early phases, just to hammer things out.

With that covered, you should be ready for the next part, where I will go into how rendering is actually performed, and how more complicated effects and functions are composed. Hopefully I'll have that out in a couple of days.

Quicklisp news: Download stats for January, 2018

· 138 days ago
Here are the raw Quicklisp download stats for January 2018.
20452  alexandria
17822 closer-mop
16236 cl-ppcre
15928 split-sequence
15762 babel
15131 iterate
14929 anaphora
14859 trivial-features
14580 let-plus
14136 trivial-gray-streams
13668 bordeaux-threads
12524 cffi
12215 trivial-garbage
11815 flexi-streams
11383 puri
11201 more-conditions
11063 nibbles
10266 usocket
9545 utilities.print-items
9499 trivial-backtrace
9312 esrap
9170 cl+ssl
8940 chunga
8812 cl-base64
8446 chipz
8417 cl-fad
8107 drakma
7212 cl-yacc
6937 named-readtables
6929 asdf-flv
6647 fiveam
6429 local-time
6179 ironclad
6103 parse-number
5850 closure-common
5844 cxml
5743 log4cl
5229 architecture.hooks
5037 cl-json
5023 parser.common-rules
4773 plexippus-xpath
4530 optima
4323 lift
4028 lparallel
4002 cl-clon
3882 cl-dot
3594 cxml-stp
3469 cl-interpol
3421 xml.location
3399 cl-unicode
3363 utilities.print-tree
3350 cl-store
3311 fare-utils
3305 fare-quasiquote
3135 slime
3109 inferior-shell
3098 fare-mop
2810 metabang-bind
2557 trivial-utf-8
2556 trivial-types
2544 cl-utilities
2288 uuid
2244 cl-annot
2234 cl-syntax
2232 quri
2128 trivial-indent
2118 documentation-utils
2035 cl-slice
2027 plump
2023 array-utils
2013 gettext
1998 static-vectors
1983 symbol-munger
1966 collectors
1933 arnesi
1932 md5
1920 access
1915 djula
1898 cl-locale
1893 cl-parser-combinators
1833 fast-io
1822 simple-date-time
1804 hunchentoot
1567 ieee-floats
1537 yason
1364 prove
1312 asdf-system-connections
1307 metatilities-base
1304 cl-containers
1192 osicat
1138 monkeylib-binary-data
1115 rfc2388
1041 trivial-shell
993 diff
989 postmodern
961 cl-custom-hash-table
929 parse-float
912 salza2
882 cl-sqlite
872 trivial-mimes


For older items, see the Planet Lisp Archives.


Last updated: 2018-06-15 21:00