Annually-Funded Developers' Update: January & February 2026

By Kathy Davis

Hello Fellow Clojurists!

This is the first of six reports from the developers who are receiving annual funding for 2026. We’ve also added in the final FastMath report from Thomas Clark (Q3 2025 project). There is a lot of great work here - so have fun exploring!

Bozhidar Batsov: nREPL, clojure-mode, inf-clojure, CIDER, drawbridge
Clojure Camp: Badges, Parsons Problems, Mobs
Eric Dallo: eca, clojure-lsp, metrepl
Jeaye Wilkerson: Jank
Michiel Borkent: SCI, babashka, Cream, clj-kondo, squint, and more

Thomas Clark: FastMath


Bozhidar Batsov

2026 Annual Funding Report 1. Published March 3, 2026.

The period was extremely productive, with solid progress on almost every front. CIDER and nREPL saw important releases, and so did clojure-mode and inf-clojure.

I also did some work on updating REPLy to use JLine 3 and tools.reader (instead of the abandoned sjacket).
Below are some highlights from the releases.

nREPL

Release: 1.6.0 (Feb 26)

New Features

Bug Fixes

Internal Improvements (January)

Documentation Overhaul (February)
A massive documentation push covering:

Housekeeping


clojure-mode 5.21.0 (Feb 18) + some ongoing work for the next release

I made the biggest updates to clojure-mode in years this past month. This might sound odd, given that clojure-ts-mode has been in great shape for a while now, but I felt that, in the spirit of Clojure’s stability, we shouldn’t force people to change their workflows if they are happy with them. I also understand that not everyone likes the complexity of using Tree-sitter (or can run a new Emacs). So I decided to tackle the issues that seemed most important, and I think the end result is pretty good. Some of the changes below are unreleased, but will be released soon: everything is already merged and I haven’t received any reports of regressions.

New Features

Bug Fixes (many long-standing)

Housekeeping


inf-clojure 3.4.0 (Feb 27)

inf-clojure rarely gets much love from me, but the project has been in good shape for a while now anyway. Still, I felt an annual cleanup and bug-fixing session was in order, and I hope you’ll appreciate it. I also restructured the docs to be easier to follow, and I finally added a comparison with CIDER.

New Features

Bug Fixes

Housekeeping


CIDER 1.21.0 “Gracia” (Feb 7)

Last, but never least… :D I didn’t do that much with CIDER for this release, as I wanted to focus on nREPL, clojure-mode and inf-clojure, but I think it still turned out pretty well.

After the release I introduced the concept of a default session (cider-set-default-session / cider-clear-default-session) to bypass sesman’s project-based dispatch (#3865). This had been in the back of my mind for years.

I’ve also spent some time cleaning up internals and improving the CI and the docs. A lot more CIDER improvements are currently brewing. :-)
As you can imagine - I have many ideas on what to tackle next, so I hope the next couple of months will be just as exciting and productive.

Thanks to everyone for your support! You rock!


Clojure Camp

2026 Annual Funding Report 1. Published March 7, 2026.


Eric Dallo

2026 Annual Funding Report 1. Published March 8, 2026.

Starting 2026 with so much energy! In these first two months I’ve been working hard on multiple projects, with most of the focus on ECA, which is growing really fast with more and more people using it and contributing. We reached 0.109.0 with lots of features and improvements! Besides that, I worked on clojure-lsp, clojure-mcp, brepl and more. I’m really happy with the progress and thankful for the Clojurists Together sponsorship! :heart:

eca

ECA keeps growing, with lots of new features, bug fixes, and improvements in stability and compatibility. These two months saw many releases with some really exciting features; the changelog is huge, but here are the highlights:

That’s really a lot of work, and it shows how excited users are about the project and how actively they’re asking for new features. After six months, ECA is in really good shape: closer to Claude Code, Cursor and other tools, but free and more extensible!

Also, we have a new webpage for eca.dev!


ECA editor plugins

All ECA editor plugins received significant updates to keep up with the new ECA server features:

clojure-lsp

We had a big release with lots of long-overdue dependency bumps, new code actions, and important bug fixes. The Extract Function code action got much more powerful with selection and threading support! For the coming months I have plans for custom code actions, memory management, and performance and classpath-scan improvements!

2026.02.20-16.08.58

metrepl

0.5.1 - 0.5.2


Jeaye Wilkerson

2026 Annual Funding Report 1. Published March 6, 2026.

Hey folks! We’re two months into the year and I’d like to cover all of the progress that’s been made on jank so far. Before I do that, I want to say thank you to all of my GitHub sponsors, as well as Clojurists Together, for sponsoring this whole year of jank’s development!

jank book

To kick things off, let me introduce the jank book. This will be the recommended and official place for people to learn jank and its related tooling. It’s currently targeted at existing Clojure devs, but that will start to shift as jank matures and I begin to target existing native devs as well. The jank book is written by me, not an LLM. If you spot any issues, or have any feedback, please do create a GitHub Discussion.

My goals for this book include:

  1. Introduce jank’s syntax and semantics
  2. Introduce jank’s tooling
  3. Walk through some small projects, start to finish
  4. Demonstrate common use cases, such as importing native libs, shipping AOT artifacts, etc.
  5. Show how to troubleshoot jank and its programs, as well as where to get help
  6. Provide a reference for error messages

As the name and technology choice implies, the jank book is heavily inspired by the Rust book.

Alpha status

jank’s switch to alpha in January was quiet. There were a few announcements made by others, who saw the commits come through or who found the jank book before I started sharing it. However, I didn’t make a big announcement myself, since I wanted to check off a few more boxes before getting the spotlight again. In particular, I spent about six weeks, at the end of 2025 and into January, fixing premature garbage collection issues. These weeks will be seared into my memory for all of my days, but the great news is that all of the issues have now been fixed. jank is more stable every day, as each new issue improves our test suite.

LLVM 22

On the tail of the garbage collection marathon, the eagerly awaited LLVM 22 release happened. We had been waiting for LLVM 22 to ship for several months, since it would be the first LLVM version which would have all of jank’s required changes upstreamed. The goal was that this would allow us to stop vendoring our own Clang/LLVM with jank and instead rely on getting it from package managers. This would make jank easier to distribute and, crucially, make jank-compiled AOT programs easier to distribute. You can likely tell from my wording that this isn’t how things went. LLVM 22 arrived with a couple of issues.

Firstly, some data which we use for very important things (loading object files, adding LLVM IR modules to the JIT runtime, interning symbols, etc.) was made private. This can happen because the C++ API for Clang/LLVM is not considered a public API and thus is not given any stability guarantees. I have been in discussions with both Clang and LLVM devs to address these issues. They are aware of jank and want to support our use cases, but we will need to codify some of our expectations in upstreamed Clang/LLVM tests so that they are less likely to be broken in the future.

Secondly, upon upgrading to LLVM 22, I found two different performance regressions which basically rendered debug builds of jank unusable on Linux (here and here). Our startup time for jank debug builds went from 1 second to 1 minute and 16 seconds. The way jank works is quite unique. This is what allows us to achieve unprecedented C++ interop, but it also stresses Clang/LLVM in ways which are not always well supported. I have been working with the relevant devs to get these issues fixed, but the sad truth is that the fixes won’t make it into LLVM 22. That means we’ll need to wait several more months for LLVM 23 before we can rely on distro packages which don’t have this issue.

That’s a tough pill to swallow, so I took a week or so to rework the way we do codegen and JIT compilation. I’ve not only optimized our approach, but also specifically crafted our codegen to avoid these slower parts of LLVM. This not only brings us back to previous speeds; it makes jank faster than it was before. Once LLVM 23 lands, the fixes for those issues will optimize things further.

So, if you’ve been wondering why I’ve been quiet these past few months, I likely had my head buried deep into one of these problems. However, with these issues out of the way, let’s cover all of the other cool stuff that’s been implemented.

nREPL server

jank has an nREPL server now! You can read about it in the relevant jank book chapter. One of the coolest parts of the nREPL server is that it’s written in jank and yet also baked into jank, thanks to our two-phase build process. The nREPL server has been tested with both NeoVim/Conjure and Emacs/CIDER. There’s a lot we can do to improve it, going forward, but it works.

As Clojure devs know, REPL-based development is revolutionary. To see jank’s seamless C++ interop combined with the tight iteration loop of nREPL is beautiful. Here’s a quote from an early jank nREPL adopter, Matthew Perry:

The new nREPL is crazy fun to play around with. Works seamlessly with my editor (NeoVim + Conjure). It’s hard to describe the experience of compiling C++ code interactively - I’m so used to long edit-compile-run loops and debuggers that it feels disorienting (in a good way!)

A huge shout out to Kyle Cesare, who originally wrote jank’s nREPL server back in August 2025. Thank you for your pioneering! If you’re interested in helping out in this space, there’s still so much to explore, so jump on in.

C++ interop improvements

Most of my other work on jank has been related to improving C++ interop.

Referred globals

jank now allows for C/C++ includes to be a part of the ns macro. It also follows ClojureScript’s design for :refer-global, to bring native symbols into the current namespace. Without this, the symbols can still be accessed via the special cpp/ namespace.

(ns foo
  (:include "gl/gl.h") ; Multiple strings are supported here.
  (:refer-global :only [glClear GL_COLOR_BUFFER_BIT])) ; Also supports :rename.

(defn clear! []
  (glClear GL_COLOR_BUFFER_BIT))

Native loop bindings

jank now supports native loop bindings. This allows for loop bindings to be unboxed, arbitrary native values. jank will ensure that the native value is copyable and supports operator=. This is great for looping with C++ iterators, for example.

(loop [i #cpp 0]
  (if (cpp/== #cpp 3 i)
    (cpp/++ i)
    (recur (cpp/++ i))))

There’s more work to be done to automatically use unboxed values and native operators when possible. For now it’s opt-in only.

Unsafe casting

jank had the equivalent of C++’s static_cast, in the form of cpp/cast. However, for some C/C++ APIs, unsafe casting is necessary. To accomplish this, jank now has cpp/unsafe-cast, which does the equivalent of a C-style cast.

(let [vga-memory (cpp/unsafe-cast uint16_t* #cpp 0xB8000)]
  ;; vga-memory can now be used as a uint16_t* into VGA text memory.
  )

Type/value DSL

This one is working, but not yet in main. jank now supports encoding C++ types via a custom DSL. With this DSL, we can support any C++ type, regardless of how complex. That includes templates, non-type template parameters, references, pointers, const, volatile, signed, unsigned, long, short, pointers to members, pointers to functions, and so on. The jank book will have a dedicated chapter on this once merged, but here’s a quick glimpse.

Here are some examples, with the C++ type on the first line and the equivalent jank DSL form on the second.

A normal C++ map template instantiation.

C++:  std::map<std::string, int*>
jank: (std.map std.string (ptr int))

A normal C++ array template instantiation.

C++:  std::array<char, 64>::value_type
jank: (:member (std.array char 64) value_type)

A sized C-style array.

C++:  unsigned char[1024]
jank: (:array (:unsigned char) 1024)

A reference to an unsized C-style array.

C++:  unsigned char(&)[]
jank: (:& (:array (:unsigned char)))

A pointer to a C++ function.

C++:  int (*)(std::string const &)
jank: (:* (:fn int [(:& (:const std.string))]))

A pointer to a C++ member function.

C++:  int (Foo::*)(std::string const &)
jank: (:member* Foo (:fn int [(:& (:const std.string))]))

A pointer to a C++ member which is itself a pointer to a function.

C++:  void (*Foo::*)()
jank: (:member* Foo (:* (:fn void [])))

This type DSL will be enabled automatically in type position for cpp/new, cpp/cast, cpp/unsafe-cast, cpp/unbox, and so on. It can also be explicitly introduced via cpp/type, in case you want to use it in value position to construct a type or access a nested value. For example, to dynamically allocate a std::map<int, float>, you could do:

(let [heap-allocated (cpp/new (std.map int float))
      stack-allocated ((cpp/type (std.map int float)))]
  ;; Use heap-allocated / stack-allocated here.
  )

Other improvements

jank will now defer JIT compilation of functions, when possible. In some scenarios, such as during AOT compilation, this can cut compile times in half. We do this by generating a stub object which will JIT compile the relevant code when it’s first called. It understands vars, too, so it will replace itself in its containing var when called so that subsequent calls through the var just go to the JIT compiled function. JVM folks happily don’t need to worry about these sorts of things, but we can have nice things, too.
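The self-replacing stub mechanism can be sketched in plain JVM Clojure. This is a hypothetical illustration of the idea only, not jank's actual (C++) implementation; `deferred`, `compile-fn` and `slow-add` are made-up names, with `compile-fn` standing in for the expensive JIT compilation step.

```clojure
;; A deferred-compilation stub: the expensive compile runs at most once,
;; on first call, and the stub then replaces itself in its containing var
;; so subsequent calls through the var go straight to the compiled fn.
(defn deferred [the-var compile-fn]
  (let [compiled (delay (compile-fn))] ; compile lazily, exactly once
    (fn stub [& args]
      (let [f @compiled]
        (alter-var-root the-var (constantly f))
        (apply f args)))))

(declare slow-add)
(def slow-add (deferred #'slow-add (fn [] (fn [a b] (+ a b)))))

(slow-add 1 2) ;; first call compiles and swaps the var; returns 3
(slow-add 3 4) ;; hits the compiled fn directly; returns 7
```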

Also, jank’s object model has been opened up. I previously documented my research into an efficient object model. Over the past couple of years of hammock time, I have found an approach which allows for JIT-defined objects while still avoiding the costs of C++’s runtime type information (RTTI). This is worthy of its own post entirely, which I will likely do once the transition is complete. For now, we have most of our code still using the old model while some of it is using the new model. This is great, though, since it allows us to port piece by piece while keeping everything in main. The main outcome of opening up the object model is that jank users can define their own jank objects which integrate well into the system, can be stored within jank data structures, and used with jank functions.

Finally, to better support nREPL, jank added support for clojure.core/future. This required an audit of all synchronization across the jank compiler and runtime. Now, we should be in a good place from which to build multi-threaded jank applications. Tools like Clang’s thread sanitizer will help ensure we stay there.
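For reference, this is the standard `future` API that jank now supports (the snippet below is plain Clojure):

```clojure
;; future runs its body on another thread; deref (@) blocks until the
;; result is ready.
(let [f (future (reduce + (range 10)))]
  @f)
;; => 45
```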

What’s next

In March, I am wrapping up work on the type DSL and getting that merged. I also need to investigate why the Arch binary package for jank is broken. Beyond that, I will be starting into some deep performance research for jank. That will mean first collecting a series of benchmarks for jank versus Clojure and then profiling and optimizing those benchmarks as needed. I would really like to get some continuous benchmarking set up, so we can track performance over time, tied to particular commits. The current plan is to spend all of Q2 on performance, but there’s a lot to do, so I won’t be able to tackle everything. Benchmark optimization posts are often quite fun, so stay tuned for the next one!


Michiel Borkent

2026 Annual Funding Report 1. Published March 6, 2026.

In this post I’ll give updates about open source I worked on during January and February 2026. To see previous OSS updates, go here.

Sponsors

I’d like to thank all the sponsors and contributors that make this work possible. Without you, the below projects would not be as mature or wouldn’t exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.


Current top tier sponsors:

Open the details section for more info about sponsoring.

Sponsor info

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

Babashka conf and Dutch Clojure Days 2026

Babashka Conf 2026 is happening on May 8th in the OBA Oosterdok library in Amsterdam! David Nolen, primary maintainer of ClojureScript, will be our keynote speaker! We’re excited to have Nubank, Exoscale, Bob and Itonomi as sponsors. Wendy Randolph will be our event host / MC / speaker liaison :-). The CfP is now closed. More information here. Get your ticket via Meetup.com (there is a waiting list, but more places may become available). The day after Babashka Conf, Dutch Clojure Days 2026 will be happening, so you can enjoy a whole weekend of Clojure in Amsterdam. Hope to see many of you there!

Projects

I spent a lot of time making SCI’s deftype, case, and macroexpand-1 match JVM Clojure more closely. As a result, libraries like riddley, cloverage, specter, editscript, and compliment now work in babashka.

After seeing charm.clj, a terminal UI library, I decided to incorporate JLine3 into babashka so people can build terminal UIs. Since I had JLine anyway, I also gave babashka’s console REPL a major upgrade with multi-line editing, tab completion, ghost text, and persistent history. A next goal is to run rebel-readline + nREPL from source in babashka, but that’s still work in progress (e.g. the compliment PR is still pending).

I’ve been working on async/await support for ClojureScript (CLJS-3470), inspired by how squint handles it. I also implemented it in SCI (scittle, nbb etc. use SCI as a library), though the approach there is different since SCI is an interpreter.
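For context, squint expresses async/await with `^:async` metadata and `js-await`, per squint's documentation; the `fetch-json` function and its body below are an illustrative sketch, not code from the CLJS or SCI patches.

```clojure
;; In squint, ^:async compiles to a JavaScript async function and
;; js-await compiles to JavaScript's await. This runs under squint's
;; JS output, not on the JVM.
(defn ^:async fetch-json [url]
  (let [resp (js-await (js/fetch url))]
    (js-await (.json resp))))
```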

Last but not least, I started cream, an experimental native binary that runs full JVM Clojure with fast startup using GraalVM’s Crema. Unlike babashka, it supports runtime bytecode generation (definterface, deftype, gen-class). It currently depends on a fork of Clojure and GraalVM EA, so it’s not production-ready yet.

Here are updates about the projects/libraries I’ve worked on in the last two months in detail.

Contributions to third party projects:

Other projects

These are (some of the) other projects I’m involved with but where little to no activity happened in the past two months.



FastMath: Thomas Clark

Q3 2025 Funding Report 2. Published Feb. 6, 2026.

Table of Contents

  1. Overview
  2. New Design
  3. New API (and implementations)
  4. Outlook

Overview

In the second term of this project, I refined and finalised the overall protocol structure for mathematical, representational and predicate operations, particularly confirming the split between what is ‘necessary’ and what is ‘extra’ when identifying and manipulating mathematical objects. Using this, I created several more protocols and implemented more rigorous testing namespaces - involving multiple tiers - leaning on another Clojurists Together project, Wolframite, for generating references. By introducing flexible constructor namespaces for both (real and complex) matrices and numbers, as well as multimethod-based type definitions, I solidified the overall design into a generalised API. Such an API now facilitates different layers of use: allowing for operator overloading, domain promotion and variadic arguments across objects as standard, but leaving the concrete implementations separate and available for when speed really matters.

Overall then, this work completes the funding period, successfully implementing the fastmath matrix protocol for complex matrices and extending this to a generalised API. There is, however, still much to do, some of which really has to be done soon for these extensions to be practically useful. Below, I expand on what was done during this period, with a discussion of the overall design, some illustrative examples of the API and a reflection on the overall status and the necessary and hopeful next steps.

New Design

Although I was an enthusiastic consumer of parts of fastmath previously, it only became apparent to me in the first part of the project just how vast the library is. This, combined with the notion that a complex matrix API necessarily requires interaction with complex numbers (and that a complex matrix implementation requires a consistent real matrix implementation), naturally led to the need to consider wider interaction problems and so wider design concerns. As highlighted in the midterm report, it became clear that, for the library to continue to expand organically, the implementation details would need to be abstracted, so that we can swap alternative backends in and out more easily in the future. This created contradictions, however.

The contradictions arose because the demands of a user-friendly (and consistent) API orient the solution towards having all of the compatible methods in the same namespace - who wants to import 20 namespaces to work on a single linear algebra problem? A decomplected (non-hierarchical) protocol structure, however, mirrors mathematical implementations well, allows for maximal code reuse and seems to fit with Clojure’s design philosophy. Furthermore, some hierarchies are good and necessary, and yet these cannot be expressed using Clojure protocols. My initial attempt in the first term approximated this by aliasing ‘lower’ (level of abstraction) protocols within higher ones, but trying to balance this with generalised (non-implementation-dependent) constructors led to cyclical dependencies.

I settled, in the end, on a complete separation of concerns: protocols expanding modularly as before (there are now about 30), separate, generalised constructors for numbers and matrices that work regardless of domain, and type implementations explicitly coupled to the underlying libraries.

The ‘user friendliness’ was then implemented entirely separately, in a new, generalised, library-level, version-controlled API that exposes the object protocols and constructors in meaningful ways, independent of the implementation. This allows the ‘backends’ to be changed simply by changing the constructor dependencies. On this layer, I then explicitly implemented the linear algebra generalisation using multimethods, so that matrices and numbers can interact consistently, with variadic operators and automatic domain promotion.
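A minimal sketch of that multimethod idea (the names `domain`, `add*` and `plus`, and the map-based complex representation, are illustrative stand-ins, not fastmath's actual internals): dispatch on the domains of both operands, promote the narrower one, and build the variadic operator as a reduce over the binary multimethod.

```clojure
;; Multimethod-based domain promotion, sketched with plain maps.
(defn domain [x]
  (if (map? x) (:domain x) :real))

(defmulti add* (fn [a b] [(domain a) (domain b)]))

(defmethod add* [:real :real] [a b]
  (+ a b))

(defmethod add* [:complex :complex] [a b]
  {:domain :complex
   :re (+ (:re a) (:re b))
   :im (+ (:im a) (:im b))})

;; Promotion: lift the real operand into the complex domain and retry.
(defmethod add* [:real :complex] [a b]
  (add* {:domain :complex :re (double a) :im 0.0} b))
(defmethod add* [:complex :real] [a b]
  (add* a {:domain :complex :re (double b) :im 0.0}))

;; The variadic operator is just a reduce over the binary multimethod.
(defn plus [x & xs]
  (reduce add* x xs))
```

With this shape, `(plus 1 {:domain :complex :re 1.0 :im 2.0})` promotes the integer and yields a complex result, mirroring how the general API lets reals, complex numbers and matrices mix.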

It is perhaps also worth mentioning, in addition to practical things like systematic naming conventions (see the forthcoming documentation), that in order to integrate the new features with the existing library, I also started to abstract operations beyond the mathematical and physical needs. Fastmath is primarily used within the SciCloj/Noj ecosystem, and so ‘representational’ protocols were introduced to model matrix access as a two-dimensional computing structure (a table), rather than a mathematical object. This pushes the library towards integration with the wider ecosystem, with the long-term goal of tighter consistency with tablecloth, tableplot and adjacent libraries.

New API (and implementations)

As a quick illustration of the APIs, consider the process of creating and working with complex numbers with pure protocols for efficiency.

(require '[fastmath.api.v2.algebra.complex :as C])
(C/i 1.0)
;; #object[fastmath.algebra.object.number.complex.ejml.ComplexNumber 0x38306cba "1.00000+0.000000i"]

(C/add (C/i 1 2) (C/i 3 4))
;; #object[fastmath.algebra.object.number.complex.ejml.ComplexNumber 0x7454ba0 "4.00000+6.000000i"]
(C/negate (C/i 1 2))
;; #object[fastmath.algebra.object.number.complex.ejml.ComplexNumber 0x27ce1f84 "-1.00000-2.000000i"]
(C/norm (C/i 1 2))
;; 2.23606797749979

Creating matrices consistently.

(def m--r (mat/diagonal (repeat 3 5)))
;; #object[fastmath.algebra.object.matrix.rectangular.real.ejml.RealDense 0x4a419cba nil]
;; :shape [3 3] :type float64
;; [[5.000 0.000 0.000]
;;  [0.000 5.000 0.000]
;;  [0.000 0.000 5.000]]
(mat/<-real ... )
;; #object[fastmath.algebra.object.matrix.rectangular.complex.ejml.ComplexDense 0x3913463b nil]
;; :shape [3 3] :type float64
;; [[5.000 0.000 0.000]
;;  [0.000 5.000 0.000]
;;  [0.000 0.000 5.000]]
(def m--c (mat/<-coll 3 3 (partition 2 (range 18))))
;; #object[fastmath.algebra.object.matrix.rectangular.complex.ejml.ComplexDense 0x9a42bb7 nil]
;; :shape [3 3] :type complex128
;; [[0.000+1.000000i   2.000+3.000000i   4.000+5.000000i  ]
;;  [6.000+7.000000i   8.000+9.000000i   10.000+11.000000i]
;;  [12.000+13.000000i 14.000+15.000000i 16.000+17.000000i]]
(mat/<-rows [[10.0 1.0  -5.0]
             [0.0 4.0  2.789]
             [-5 31 8]])
;; #object[fastmath.algebra.object.matrix.rectangular.real.ejml.RealDense 0x3c9a1815 nil]
;; :shape [3 3] :type float64
;; [[10.000 1.000  -5.000]
;;  [0.000  4.000  2.789 ]
;;  [-5.000 31.000 8.000 ]]
(mat/<-rows [[(i 78 0.0) (i 0.0 77) (i 16.13456 56)]
             [(i 9134 -341) (i 24 2341) (i 10 -56)]])
;; #object[fastmath.algebra.object.matrix.rectangular.complex.ejml.ComplexDense 0x113c5d0b nil]
;; :shape [2 3] :type complex128
;; [[78.000+0.000000i     0.000+77.000000i     16.135+56.000000i   ]
;;  [9134.000-341.000000i 24.000+2341.000000i  10.000-56.000000i   ]]

And mixing it all together, with domain promotion, general symbols and variadic arguments.

(refer-clojure :exclude [+ -])
(require '[fastmath.api.v2.algebra.general :as alg :refer [+ *]])

(+ 5 (C/i 1 2) (mat/identity 2))
;; #object[fastmath.algebra.object.matrix.rectangular.complex.ejml.ComplexDense 0x61223e14 nil]
;; :shape [2 2] :type complex128
;; [[7.000+2.000000i 6.000+2.000000i]
;;  [6.000+2.000000i 7.000+2.000000i]]

(+ m--c m--r m--r m--c)
;; #object[fastmath.algebra.object.matrix.rectangular.complex.ejml.ComplexDense 0x4f72c29b nil]
;; :shape [3 3] :type complex128
;; [[10.000+2.000000i  4.000+6.000000i   8.000+10.000000i ]
;;  [12.000+14.000000i 26.000+18.000000i 20.000+22.000000i]
;;  [24.000+26.000000i 28.000+30.000000i 42.000+34.000000i]]

Outlook

As can be seen above, it is now possible to interleave fundamental matrix and numeric operations without overt concern for domain. This is simply scratching the surface of fastmath’s potential however. At the end of the first term, it had already become clear that the original scope of the project was too large to be completed within the given timeframe, and so all work had to be done with future extension in mind.

There are some immediate next steps, however, that ideally would already have been completed: coordinating the integration of this branch with generateme, finalising the optimization and Clay documentation, and publicizing the subsequent performance metrics. I hope to complete these promptly.

Medium-term goals would then see this extension integrated more deeply with other fastmath features. For, as discussed, fastmath is a large library, and although the core matrix protocol has been implemented, there are many other functions that apply not just to matrices but which could benefit from a unified numeric tower of mathematical operations. Also within reach are the exploitation of EJML’s parallelism for longer calculations, as well as an alternative implementation using ojAlgo.

Following this, I would also like to develop explicit links between fastmath and other libraries. The lowest-hanging fruit would be to officially support fastmath-wolframite interop. Having already used Wolframite to provide tests, and having already worked on the Wolframite library itself, this should be relatively straightforward, and it would allow easier delegation from fastmath to Wolframite (and back) for currently unimplemented algorithms. Similarly, as part of my long-term goal of integrating various Clojure libraries for physics research, I would like to make interop with emmy a smooth reality, potentially even using fastmath as a backend for its calculations. Likewise, and more easily than for emmy, I would like to see libraries like qclojure use fastmath for their quantum implementations.

Overall then, although I would have liked to get further in the time available, I am excited by this experiment in the suitability of Clojure for general mathematics. I hope that the library, even just the algebraic namespace protocols themselves, provides a basis for future work in a wide array of applications, and I hope that, having read this little report, you are inspired to try it out in your own work. Please don’t hesitate to get in touch if that’s the case.