
Monday, June 12, 2017

Tagsistant

"Tagsistant is a semantic file system for Linux, a personal tool to catalog files using tags (labels, mnemonic informations) rather than directories.

Tagsistant replaces the concept of directory with that of tag, but since it still has to deal with directories, it pairs the concept of tag with that of directory. So within Tagsistant a tag is a directory and a directory is a tag.

To be more precise, this is not true of all directories. First-level directories are special. The one called tags/ hosts all the tags. This means that every directory created inside it is in fact a tag.

Another, called store/, hosts contents, like files. All the tags created inside tags/ are available inside store/. To tag a file all you have to do is to copy it inside one or more directories under store/.

Another special first level directory is relations/. Inside it you can establish relations between tags using mkdir."
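
A minimal sketch of that workflow in Python (assuming Tagsistant is already mounted at a hypothetical ~/tagsistant mount point; the paths and file names are only illustrative):

import os, shutil

mount = os.path.expanduser("~/tagsistant")                         # hypothetical mount point
os.makedirs(os.path.join(mount, "tags", "music"), exist_ok=True)   # a directory under tags/ is a tag
os.makedirs(os.path.join(mount, "tags", "jazz"), exist_ok=True)
# Tagging a file is just copying it into one or more tag directories under store/:
shutil.copy("song.mp3", os.path.join(mount, "store", "music"))
shutil.copy("song.mp3", os.path.join(mount, "store", "jazz"))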

https://github.com/StrumentiResistenti/Tagsistant

Sultan

"Sultan is a Python package for interfacing with command-line utilities, like yum, apt-get, or ls, in a Pythonic manner. It lets you run command-line utilities using simple function calls.

Sultan allows you to interface with command-line utilities from Python without having to write your scripts in Bash.

Sultan was designed because Bash just does not scale well with most projects. As much as we can try, Bash does not have a proper package management system. So, we’re left with script files that are sourced and called, but it gets quite complicated as soon as you have more than 10 scripts.

Python has a great package and module system that allows us to create complex projects using the Python language, but also leverage a lot of great tools and functionality that we’ve grown to love and expect from Bash."
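
As a quick, hedged taste of the API (based on the usage pattern in Sultan's documentation; the exact commands below are only illustrative):

from sultan.api import Sultan

# Compose shell commands as attribute calls; .run() executes them.
with Sultan.load() as s:
    s.mkdir('-p /tmp/sultan_demo').run()
    s.ls('-la /tmp/sultan_demo').run()

# Privileged commands, e.g. installing a package with yum:
with Sultan.load(sudo=True) as s:
    s.yum('install -y tree').run()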

https://sultan.readthedocs.io/en/latest/

Wednesday, June 7, 2017

IRPF90

"IRPF90 is a Fortran90 preprocessor written in Python for programming using the Implicit Reference to Parameters (IRP) method. It simplifies the development of large fortran codes in the field of scientific high performance computing.

IRPF90 is a Fortran programming environment which helps the development of large Fortran codes by applying the Implicit Reference to Parameters method (IRP).

In Fortran programs, the programmer has to focus on the order of the instructions: before using a variable, the programmer has to be sure that it has already been computed in all possible situations. For large codes, this is a common source of errors.

In IRPF90 most of the order of instructions is handled by the pre-processor, and an automatic mechanism guarantees that every entity is built before being used. This mechanism relies on the {needs/needed by} relations between the entities, which are built automatically.

Codes written with IRPF90 often execute faster than Fortran programs, are faster to write, and are easier to maintain."
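
A tiny conceptual sketch of the IRP idea in Python (this is not IRPF90 syntax, which uses Fortran provider blocks; it only illustrates "every entity is built before being used, exactly once"):

from functools import lru_cache

# Each "entity" has a provider; asking for an entity builds its
# dependencies first, and memoization ensures it is computed only once.
@lru_cache(maxsize=None)
def x():
    return 3.0

@lru_cache(maxsize=None)
def u1():          # u1 needs x
    return x() + 1.0

@lru_cache(maxsize=None)
def u2():          # u2 needs x
    return 2.0 * x()

@lru_cache(maxsize=None)
def t():           # t needs u1 and u2
    return u1() + u2()

print(t())         # requesting t transparently builds x, u1 and u2 in the right order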

https://github.com/scemama/irpf90

http://irpf90.ups-tlse.fr/index.php?title=IRPF90

https://scemama.gitbooks.io/irpf90/

CFortranTranslator

"A translator from Fortran90/Fortran77(ISO/IEC 1539:1991) to C++.

Fortran is an efficient tool in scientific calculation. However, translating old Fortran code to C++ can sometimes enable more programming abstraction, better GUI framework support, more capable IDEs, and easier interaction.

This translator is not intended to improve existing code, but to provide convenience for those who need the features of C++ while preserving Fortran traits as much as possible."

https://github.com/CalvinNeo/CFortranTranslator

BLIS

"BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries. The framework was designed to isolate essential kernels of computation that, when optimized, immediately enable optimized implementations of most of its commonly used and computationally intensive operations. BLIS is written in ISO C99 and available under a new/modified/3-clause BSD license. While BLIS exports a new BLAS-like API, it also includes a BLAS compatibility layer which gives application developers access to BLIS implementations via traditional BLAS routine calls.

For a thorough presentation of our framework, please read our recently accepted journal article, "BLIS: A Framework for Rapidly Instantiating BLAS Functionality". For those who just want an executive summary, please see the next section.

In a follow-up article, "The BLIS Framework: Experiments in Portability", we investigate using BLIS to instantiate level-3 BLAS implementations on a variety of general-purpose, low-power, and multicore architectures.

An IPDPS'14 conference paper titled "Anatomy of High-Performance Many-Threaded Matrix Multiplication" systematically explores the opportunities for parallelism within the five loops that BLIS exposes in its matrix multiplication algorithm.

It is our belief that BLIS offers substantial benefits in productivity when compared to conventional approaches to developing BLAS libraries, as well as a much-needed refinement of the BLAS interface, and thus constitutes a major advance in dense linear algebra computation. While BLIS remains a work-in-progress, we are excited to continue its development and further cultivate its use within the community.

BLIS offers several advantages over traditional BLAS libraries:
  • Portability that doesn't impede high performance. Portability was a top priority of ours when creating BLIS. With zero additional effort on the part of the developer, BLIS is configurable as a fully-functional reference implementation. But more importantly, the framework identifies and isolates a key set of computational kernels which, when optimized, immediately and automatically optimize performance across virtually all level-2 and level-3 BLIS operations. In this way, the framework acts as a productivity multiplier. And since the optimized (non-portable) code is compartmentalized within these few kernels, instantiating a high-performance BLIS library on a new architecture is a relatively straightforward endeavor.
  • Generalized matrix storage. The BLIS framework exports interfaces that allow one to specify both the row stride and column stride of a matrix. This allows one to compute with matrices stored in column-major order, row-major order, or by general stride. (This latter storage format is important for those seeking to implement tensor contractions on multidimensional arrays.) Furthermore, since BLIS tracks stride information for each matrix, operands of different storage formats can be used within the same operation invocation. By contrast, BLAS requires column-major storage. And while the CBLAS interface supports row-major storage, it does not allow mixing storage formats. (A small stride sketch follows this list.)
  • Full support for the complex domain. BLIS operations are developed and expressed in their most general form, which is typically in the complex domain. These formulations then simplify elegantly down to the real domain, with conjugations becoming no-ops. Unlike the BLAS, all input operands in BLIS that allow transposition and conjugate-transposition also support conjugation (without transposition), which obviates the need for thread-unsafe workarounds. Also, where applicable, both complex symmetric and complex Hermitian forms are supported. (BLAS omits some complex symmetric operations, such as symv, syr, and syr2.)
  • Advanced multithreading support. BLIS allows multiple levels of symmetric multithreading for nearly all level-3 operations. (Currently, users may choose to obtain parallelism via either OpenMP or POSIX threads). This means that matrices may be partitioned in multiple dimensions simultaneously to attain scalable, high-performance parallelism on multicore and many-core architectures. The key to this innovation is a thread-specific control tree infrastructure which encodes information about the logical thread topology and allows threads to query and communicate data amongst one another. BLIS also employs so-called "quadratic partitioning" when computing dimension sub-ranges for each thread, so that arbitrary diagonal offsets of structured matrices with unreferenced regions are taken into account to achieve proper load balance.
  • Ease of use. The BLIS framework, and the library of routines it generates, are easy to use for end users, experts, and vendors alike. An optional BLAS compatibility layer provides application developers with backwards compatibility to existing BLAS-dependent codes. Or, one may adjust or write their application to take advantage of new BLIS functionality (such as generalized storage formats or additional complex operations) by calling BLIS directly. BLIS's interfaces will feel familiar to many veterans of BLAS since BLIS exports APIs with BLAS-like calling sequences. And experts will find BLIS's internal object-based APIs a delight to use when customizing or writing their own BLIS operations. (Objects are relatively lightweight structs and passed by address, which helps tame function calling overhead.)
  • Multilayered API and exposed kernels. The BLIS framework exposes its implementations in various layers, allowing expert developers to access exactly the functionality desired. This layered interface includes that of the lowest-level kernels, for those who wish to bypass the bulk of the framework. Optimizations can occur at various levels, in part thanks to exposed packing and unpacking facilities, which by default are highly parameterized and flexible.
  • Functionality that grows with the community's needs. As its name suggests, the BLIS framework is not a single library or static API, but rather a nearly-complete template for instantiating high-performance BLAS-like libraries. Furthermore, the framework is extensible, allowing developers to leverage existing components to support new operations as they are identified. If such operations require new kernels for optimal efficiency, the framework and its APIs will be adjusted and extended accordingly.
  • Code re-use. Auto-generation approaches to achieving the aforementioned goals tend to quickly lead to code bloat due to the multiple dimensions of variation supported: operation (i.e. gemm, herk, trmm, etc.); parameter case (i.e. side, [conjugate-]transposition, upper/lower storage, unit/non-unit diagonal); datatype (i.e. single-/double-precision real/complex); matrix storage (i.e. row-major, column-major, generalized); and algorithm (i.e. partitioning path and kernel shape). These "brute force" approaches often consider and optimize each operation or case combination in isolation, which is less than ideal when the goal is to provide entire libraries. BLIS was designed to be a complete framework for implementing basic linear algebra operations, but supporting this vast amount of functionality in a manageable way required a holistic design that employed careful abstractions, layering, and recycling of generic (highly parameterized) codes, subject to the constraint that high performance remain attainable.
  • A foundation for mixed domain and/or mixed precision operations. BLIS was designed with the hope of one day allowing computation on real and complex operands within the same operation. Similarly, we wanted to allow mixing operands' floating-point precisions, or both domain and precision. Unfortunately, this feature results in a significant amount of additional code, mostly in level-2 and lower operations, thus, it is disabled by default. However, mixing domains in level-3 operations is possible, in theory, with almost no additional effort on the part of the library developer, and such operations would remain capable of high performance. (Please note that this functionality is still highly experimental and should be thought of as a feature that will be more thoroughly implemented at some future date.)
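
A small Python sketch of the generalized (row stride, column stride) storage mentioned in the list above; this is only the indexing rule, not BLIS code:

# Element (i, j) of an m-by-n matrix in a flat buffer lives at offset i*rs + j*cs.
def offset(i, j, rs, cs):
    return i * rs + j * cs

m, n = 3, 4
buf = list(range(m * n))   # some flat storage

# Column-major (Fortran/BLAS): rs = 1, cs = m.  Row-major (C): rs = n, cs = 1.
col_major = [[buf[offset(i, j, 1, m)] for j in range(n)] for i in range(m)]
row_major = [[buf[offset(i, j, n, 1)] for j in range(n)] for i in range(m)]
# General stride covers anything else, e.g. a matrix view embedded in a larger tensor.
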
https://github.com/amd/blis

PyCall.jl

"This package provides the ability to directly call and fully interoperate with Python from the Julia language. You can import arbitrary Python modules from Julia, call Python functions (with automatic conversion of types between Julia and Python), define Python classes from Julia methods, and share large data structures between Julia and Python without copying them."

https://github.com/JuliaPy/PyCall.jl

ezCU

"ezCU is a C/C++ and Fortran wrapper that makes it easier to use CUDA for scientific computing. Writing a CUDA code evolves including multiple lines of host code, which often requires additional efforts and coding skills, especially in the case of big projects with lots of legacy code. In scientific computing, host code is usually cumbersome in such a manner scientists would spend more time putting together the host code rather than focusing on accelerating their workloads on GPUs (or other CUDA capable hardware, if any).

ezCU extensively reduces the need to focus on the host code and offers a set of functionalities in C/C++ and Fortran to help efficiently exploit hardware accelerators for scientific computing.

ezCU offers a transparent way to manage memory objects on different hardware accelerators with different memory models thanks to a set of bitwise flags."

https://github.com/issamsaid/ezCU

https://github.com/issamsaid/hiCL

PyRsw

"PyRsw is a python solver of the Rotating Shallow Water Model. This is an approximate model that can be derived from the full Bossinessq equations in the limit of constant density and strictly hydrostatic motions.

It is more general than the one-layer quasigeostrophic model in that it allows for variable Rossby numbers and can include gravity waves. It is a useful model to study the oceans, the atmosphere and other planetary systems as well.

We hope that this will be of interest both to students and researchers in the field. The aim is to create examples that can illustrate the basic physical processes, and documentation that explains them. Then the user can modify it to study other processes.

PyRsw is a work in progress and we greatly value people's comments and contributions.

Presently, PyRsw is strictly pseudo-spectral but there are plans to include a Finite Volume option.
PyRsw is threaded if you have pyfftw installed. If it is not present, the FFTs from numpy will be used instead. Extending this to include MPI would make things run even faster."
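
A minimal sketch of that pyfftw-or-numpy fallback idiom (this is the general pattern, not PyRsw's actual code):

import numpy as np

try:
    from pyfftw.interfaces import numpy_fft as fft   # threaded, numpy-compatible FFTs
    from pyfftw.interfaces import cache
    cache.enable()                                    # reuse FFTW plans between calls
except ImportError:
    from numpy import fft                             # fall back to numpy's FFTs

u = np.random.rand(128)
u_hat = fft.fft(u)                                    # same call works with either backend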

https://github.com/PyRsw/PyRsw

autovpn

"autovpn is a tool to automatically connect you to a random VPN in a country of your choice. It uses openvpn to connect you to a server obtained from VPN Gate."

https://github.com/adtac/autovpn

http://www.vpngate.net/en/

https://openvpn.net/

Climate

"Climate is the ultimate command line tool for Linux. It provides a huge number of command line options for developers to automate their Linux system. This tool can be extremely helpful to learn various unix commands too. There is an option to print each command before they're executed to help you memorize them over time.

The commands are:

climate help - show this help and exit
climate update - update your climate install
climate uninstall - uninstall climate :(
climate version - display the version and credits

climate weather [location] - get the weather

climate battery - display remaining battery
climate sleep - put computer to sleep
climate lock - lock computer
climate shutdown [minutes] - shutdown the computer
climate restart - restart the computer
climate time - show the time
climate clock [remove] - put a console clock in the top right corner
climate countdown <seconds> - a countdown timer
climate stopwatch - a stopwatch
climate ix - pipe output to ix.io

climate biggest-files [path] - find the biggest files recursively
climate biggest-dirs [path] - find the biggest directories
climate dir-size [path] - find directory size
climate remove-empty-dirs [path] - remove empty directories
climate extract <file> [path] - extract any given archive
climate search <text> [ext] - search for the given pattern recursively
climate find-duplicates [path] - report duplicate files in a directory
climate count <file> <string> - count the number of occurrences
climate replace <text> <replacement> [ext] - replace all occurrences
climate monitor <file> - monitor file for changes
climate ramfs <size> <path> - create a ramfs of size (in MB) at path

climate speedtest - test your network speed
climate local-ip - retrieve your local IP address
climate is-online - verify if you're online
climate public-ip - retrieve your public IP address
climate ports - list open ports
climate hosts - edit the hosts file
climate http-server [port] - http-server serving the current directory
climate is-up <domain> - determine if server is up
climate httpstat <url> - visualize curl statistics with httpstat
climate ipinfo [ip] - look up an IP with the ipinfo.io API

climate download-file <file> - download file from server
climate download-dir <dir> - download dir from server
climate upload <path> <remote-path> - upload to server
climate ssh-mount <remote-path> - mount a remote path
climate ssh-unmount <local-path> - unmount an ssh mount

climate undo-commit - undo the latest commit
climate reset-local - reset local repo to match remote
climate pull-latest - sync local with remote
climate list-branches - list all branches
climate repo-size - calculate the repo size
climate user-stats <name> - calculate total contribution for a user

climate overview - display a performance overview
climate memory - find memory used
climate disk - find disk used
climate get-pids <process> - get all PIDs for a process name
climate trash-size - find the trash size
climate empty-trash - empty the trash

https://github.com/adtac/climate

PSI

"PSI is a solver which takes as input a probabilistic program and automatically computes the exact probability density function (PDF) represented by that program. To compute the PDF, PSI uses symbolic inference techniques. It handles probabilistic programs with both discrete and continuous distributions. PSI assumes that loops in the probabilistic program are of finite length. PSI's ability to compute exact PDFs is in contrast to almost all existing techniques which are approximate in nature (these use sampling or variational methods).

The PDF computed by PSI can be used for a variety of queries, including computing the probability of assertions, expectations, performing inference, computing differences, quantifying the loss of precision, and others. Because of this, PSI is a useful platform for encoding applications from various domains that require probabilistic reasoning."

http://psisolver.org/

https://github.com/eth-srl/psi

Tuesday, June 6, 2017

fimex

"Fimex is a library and a program to convert gridded geospatial data between different formats and projections."

https://github.com/metno/fimex

Pynamical

"A Python package for modeling, simulating, visualizing, and animating discrete nonlinear dynamical systems and chaos.

Pynamical uses pandas, numpy, and numba for fast simulation, and matplotlib for beautiful visualizations and animations to explore system behavior. Compatible with Python 2+3.

You can read/cite the journal article about pynamical: Boeing, G. 2016. "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction." Systems, 4 (4), 37. doi:10.3390/systems4040037.

Nearly all nontrivial real-world systems are nonlinear dynamical systems. Chaos describes certain nonlinear dynamical systems that have a very sensitive dependence on initial conditions. Chaotic systems are always deterministic and may be very simple, yet they produce completely unpredictable and divergent behavior. Systems of nonlinear equations are difficult to solve analytically, and scientists have relied heavily on visual and qualitative approaches to discover and analyze the dynamics of nonlinearity. Indeed, few fields have drawn as heavily from visualization methods for their seminal innovations: from strange attractors, to bifurcation diagrams, to cobweb plots, to phase diagrams and embedding. Although the social sciences are increasingly studying these types of systems, seminal concepts remain murky or loosely adopted. This article has three aims. First, it argues for several visualization methods to critically analyze and understand the behavior of nonlinear dynamical systems. Second, it uses these visualizations to introduce the foundations of nonlinear dynamics, chaos, fractals, self-similarity and the limits of prediction. Finally, it presents Pynamical, an open-source Python package to easily visualize and explore nonlinear dynamical systems’ behavior."
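
To make the idea concrete, here is a plain numpy/matplotlib sketch (not the pynamical API) of the kind of bifurcation diagram such a package produces for the logistic map x_{t+1} = r*x_t*(1 - x_t):

import numpy as np
import matplotlib.pyplot as plt

rates = np.linspace(2.5, 4.0, 1000)   # growth rate r, one column per value
x = np.full_like(rates, 0.5)          # initial population for every rate

for _ in range(200):                  # discard transient iterations
    x = rates * x * (1 - x)

r_pts, x_pts = [], []
for _ in range(100):                  # record the attractor
    x = rates * x * (1 - x)
    r_pts.append(rates.copy())
    x_pts.append(x.copy())

plt.plot(np.concatenate(r_pts), np.concatenate(x_pts), ',k', alpha=0.25)
plt.xlabel('growth rate r')
plt.ylabel('population x')
plt.show()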

https://github.com/gboeing/pynamical

OSMnx

"OSMnx is a Python 2+3 package that lets you download spatial geometries and construct, project, visualize, and analyze street networks from OpenStreetMap's APIs. Users can download and construct walkable, drivable, or bikable urban networks with a single line of Python code, and then easily analyze and visualize them.

In a couple of lines of code you can examine intersection density, network circuity, average block size, PageRank, betweenness centrality, connectivity, the spatial distribution of dead-ends or 4-way intersections, etc., for anywhere in the world.

You can just as easily download and work with building footprints, elevation data, and network routing.

OSMnx lets you download street network data and build topologically corrected street networks, project to UTM and plot the networks, and save the street network as SVGs, GraphML files, or shapefiles for later use. The street networks are directed and preserve one-way directionality. API responses are cached locally so OSMnx doesn't have to request the same data from the API multiple times, saving bandwidth and increasing speed.
You can download a street network by providing OSMnx any of the following (demonstrated in the examples below):
  • a bounding box
  • a lat-long point plus a distance (either distance along the network, or cardinal)
  • an address plus a distance (either distance along the network, or cardinal)
  • a place name or list of place names (for OSMnx to automatically geocode and get the boundary of)
  • a polygon of the desired street network's boundaries
You can also specify several different network types:
  • drive - get drivable public streets (but not service roads)
  • drive_service - get drivable streets, including service roads
  • walk - get all streets and paths that pedestrians can use (this network type ignores one-way directionality)
  • bike - get all streets and paths that cyclists can use
  • all - download all non-private OSM streets and paths
  • all_private - download all OSM streets and paths, including private-access ones
For example, to download, construct, project, and plot Manhattan's drivable street network:

import osmnx as ox

G = ox.graph_from_place('Manhattan, New York, USA', network_type='drive')
ox.plot_graph(ox.project_graph(G))
 
For an in-depth demonstration of creating street networks, see this notebook."
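
As a hedged follow-up to the example above (function names as in the 2017-era OSMnx API; check the current documentation), the downloaded graph can be summarized directly:

stats = ox.basic_stats(G)         # node/edge counts, intersection density, circuity, etc.
print(stats['n'], stats['m'])     # number of nodes and number of edges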

https://github.com/gboeing/osmnx/

Grafoscopio

"Grafoscopio is a moldable tool for interactive documentation and data visualization, that is being used in citizen, garage & open science, reproducible research, (h)ac(k)tivism, open & community innovation, domain specific visualization and data journalism, and has a lot of other potential uses. Grafoscopio is covered by a Free Libre Open Source Software license (MIT) and it has an associated hackathon/workshop, called the Data Week, oriented to citizen concerns, which are related and mediated by data and visualization. There we learn, adapt and feedback this tool.
Grafoscopio is and integrates simple and self-cointained "pocket infractures", that can be execute On/Off-line, from a USB thumb drive, a rasperry-Pi alike computer, a modest server or any hardware in between and beyond. 


Grafoscopio creates interactive documents which are modeled as trees. Trees structure the document, provide it with sequence and hierarchy, and allow authors to write in a non-linear fashion. Trees can have basically two kinds of tagged nodes: documentation nodes (by default, or untagged) and code nodes (tagged as "code"). Inside documentation nodes you write in pandoc's markdown. Inside code nodes you write in Smalltalk; they are executable playgrounds, with code completion, data visualization and inspection capabilities, and everything you would expect of a live coding environment. This approach brings interactive documentation inside the Pharo image. To output to the external world, the tree is traversed and processed according to its tags and the document is exported as a markdown document. There you use the pandoc toolkit to produce the desired output, which mostly consists of a one-liner command to select your output format.

Tagged nodes have other benefits: for example, you can mark some nodes as invisible, to be ignored by the traverser, or extend your tag set to gain finer control over the behavior of the document. Tags can also work as an ad-hoc emergent language, so you can query the document and use the same tree to address different audiences, while keeping consistency and unity.

Grafoscopio tries to be a simple, understandable, moldable, versatile, and flexible tool thanks to the power of the Pharo Smalltalk ecosystem and its combination with mature external and internal frameworks and tools. It uses:
  • Internal
    • GT Tools and Spec for embeddable Moose playgrounds, GUI and interactive nodes.
    • Roassal for data visualization.
    • STON for a light data storage and notebooks format.
    • Fuel for medium data storage and object serialization.
  • External:
    • Fossil SCM for collaboration and traceability of the documents history.
    • SQLite for storage and management of tabular data.
    • Pandoc for exporting to pdf/printing and html/web formats.
http://mutabit.com/grafoscopio/index.en.html

Dataviz

"Dataviz, is a companion package for Grafoscopio that puts together several examples of Domain Specific Visualizations and Domain Specific Languages (DSV, DSL, respectively) developed with the Roassal agile visualization engine. Included prototypes approach several themes like: Panama Papers as reproducible Research, Twitter Data Selfies and published medicine information access and are presented using blog post and/or internal interactive documentation, via Grafoscopio notebooks. The classes in the package and their documentation show several levels of maturity from pretty mature (Panama Papers) to early coder practices (Infomed) and on going developments (Twitter Data Selfies) and are offered as examples and excersises to learners in our recurrent Data Week workshop+hackathon, for code testing and refactoring."

http://mutabit.com/repos.fossil/grafoscopio/doc/tip/Packages/Dataviz/intro.md

Pharo

"Pharo is a pure object-oriented programming language and a powerful environment, focused on simplicity and immediate feedback (think IDE and OS rolled into one).

Pharo's goal is to deliver a clean, innovative, free and open-source immersive environment. Here is the (to be revised) Pharo vision document.

By providing a stable and small core system, excellent development tools, and maintained releases, Pharo is an attractive platform to build and deploy mission-critical applications.

Pharo fosters a healthy ecosystem of both private and commercial contributors who advance and maintain the core system and its external packages.

The features include:
  • A dynamic, pure object-oriented programming language in the tradition of Smalltalk
  • An IDE (integrated development environment)
  • A huge library and set of external packages
http://pharo.org/

http://catalog.pharo.org/

http://smalltalkhub.com/

http://mutabit.com/grafoscopio/index.en.html



Thursday, June 1, 2017

Functional Geometry

"This notebook is a collection of preliminary notes about a "code camp" (or a series of lectures) aimed at young students inspired by the fascinating Functional Geometry paper of Peter Henderson.

In such work the Square Limit woodcut by Maurits Cornelis Escher is reconstructed from a set of primitive graphical objects suitably composed by means of a functional language.

Here the approach will be somewhat different: first of all because our recipients will be students new to computer science (instead of fellow researchers), but also because, besides recalling the fundamental concepts of abstraction levels (and barriers), primitives, and composition present in the original paper, we will also take the opportunity to introduce some (albeit to some extent elementary) considerations on algebra and geometry, programming and recursion (and perhaps discuss some implementation details).

This work is to be considered very preliminary: it is not yet structured in a series of lectures, nor has the level at which every topic is to be presented been worked out, according to the age (or previous knowledge) of the students. The language and level of detail used here are intended for instructors and teachers, and the various topics are listed as mere hints, not yet as a viable and ready-to-use syllabus.

As a last remark, before actually beginning with the notes, the code of this notebook is very loosely derived from previous "implementations" of Functional Geometry such as Shashi Gowda's Julia version and Micah Hahn's Haskell version (containing the Bézier curve description of the Escher fish used here). I decided to rewrite such code in a Jupyter notebook written in Python 3, a simple and widespread language, to make it easier for instructors to adopt it.

The source notebook is available on GitHub (under GPL v3), feel free to use issues to point out errors, or to fork it to suggest edits."

https://mapio.github.io/programming-with-escher/

Tad

"Tad is a free (MIT Licensed) desktop application for viewing and analyzing tabular data.

A fast CSV file viewer that works on large files. CSV data is imported into SQLite for fast, accurate processing.

It's a Pivot Table for analyzing and exploring data in SQL data sources.

Designed to fit in to the workflow of data engineers and data scientists."

http://tadviewer.com/

Cuis

"Cuis is a free Smalltalk-80 environment originally derived from Squeak with a specific set of goals: being simple and powerful. It is also portable to any platform, fast and efficient. This means it is a great tool for running on any hardware, ranging from RasPis and the like and phones, up to cloud servers, and everything in between, including regular laptops and PCs.
Cuis is
  • Small
  • Clean
  • Appropriable
Like Squeak, Cuis is also:
  • Open Source
  • Self contained
  • Multiplatform
Like other Smalltalk-80 environments (including Squeak, Pharo and others), Cuis is also:
  • A complete development environment written in itself
  • A pure, dynamic Object Oriented language
Cuis shares the OpenSmalltalk Virtual Machine with Squeak, Pharo and Newspeak. What sets Cuis apart from the rest of the Squeak family is the focus on Smalltalk-80 and an active attitude towards system complexity.

We follow a set of ideas that started with Jean Piaget's Constructivism and were later explored in Seymour Papert's Mathland. These led to Alan Kay's Learning Research Group's Personal Computer for Children of All Ages, Personal Dynamic Media, i.e. the Dynabook, and then to Smalltalk-80. To us, a Smalltalk system is a Dynabook: a place to experiment and learn, and a medium to express the knowledge we acquire. We understand software development as the activity of learning and documenting knowledge, for us and others to use, and also to be run on a computer. The fact that the computer run is useful is a consequence of the knowledge being sound and relevant. (Just making it run is not the important part!)

Cuis Smalltalk is our attempt at this. Furthermore, we believe we are doing something else that no other Smalltalk, commercial or open source, does. We attempt to give the true Smalltalk-80 experience, and keep Smalltalk-80 not as legacy software to be run in an emulator, but as a live, evolving system. We feel we are the real keepers of Smalltalk-80, and enablers of the Dynabook experience.
Cuis is continuously evolving towards simplicity. Each release is better (i.e. simpler) than the previous one. At the same time, features are enhanced, and any reported bugs fixed. We also adopt recent enhancements from Squeak."

https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev

http://cuis-smalltalk.org/

TLA+

"TLA+ (pronounced as tee ell a plus, /ˈt ɛl plʌs/) is a formal specification language developed by Leslie Lamport. It is used to design, model, document, and verify concurrent systems. TLA+ has been described as exhaustively-testable pseudocode[4] and blueprints for software systems.[5]

For design and documentation, TLA+ fulfills the same purpose as informal technical specifications. However, TLA+ specifications are written in a formal language of logic and mathematics, and the precision of specifications written in this language is intended to uncover design flaws before system implementation is underway.[6]

Since TLA+ specifications are written in a formal language, they are amenable to finite model checking. The model checker finds all possible system behaviours up to some number of execution steps, and examines them for violations of desired invariance properties such as safety and liveness. TLA+ specifications use basic set theory to define safety (bad things won't happen) and temporal logic to define liveness (good things eventually happen).

TLA+ is also used to write machine-checked proofs of correctness both for algorithms and mathematical theorems. The proofs are written in a declarative, hierarchical style independent of any single theorem prover backend. Both formal and informal structured mathematical proofs can be written in TLA+; the language is similar to LaTeX, and tools exist to translate TLA+ specifications to LaTeX documents.[7]

TLA+ was introduced in 1999, following several decades of research into a verification method for concurrent systems. A toolchain has since developed, including an IDE and distributed model checker. The pseudocode-like language PlusCal was created in 2009; it transpiles to TLA+ and is useful for specifying sequential algorithms. TLA+2 was announced in 2014, expanding language support for proof constructs. The current TLA+ reference is The TLA+ Hyperbook by Leslie Lamport."

https://en.wikipedia.org/wiki/TLA%2B

http://lamport.azurewebsites.net/tla/tla.html

https://pron.github.io/posts/tlaplus_part1

Tectonic

"Tectonic is a modernized, complete, self-contained TeX/LaTeX engine, powered by XeTeX and TeXLive.

The features include:
  • Tectonic automatically downloads support files so you don’t have to install a full LaTeX system in order to start using it. If you start using a new LaTeX package, Tectonic just pulls down the files it needs and continues processing. The underlying “bundle” technology allows for completely reproducible document compiles. Thanks to JFrog Bintray for hosting the large LaTeX resource files!
  • Tectonic has sophisticated logic and automatically loops TeX and BibTeX as needed, and only as much as needed. In its default mode it doesn’t write TeX’s intermediate files and always produces a fully-processed document.
  • The tectonic command-line program is quiet and never stops to ask for input.
  • Thanks to the power of XeTeX, Tectonic can use modern OpenType fonts and is fully Unicode-enabled.
  • The Tectonic engine has been extracted into a completely self-contained library so that it can be embedded in other applications.
  • Tectonic has been forked from the old-fashioned WEB2C implementation of TeX and is developed in the open on GitHub using modern tools like the Rust language.
https://tectonic-typesetting.github.io/en-US/