Pages

Friday, March 31, 2017

Lisp Info



Articulate Common Lisp:  How to Write Lisp in 2017 - http://articulate-lisp.com/

Earth Enterprise

"Earth Enterprise is the open source release of Google Earth Enterprise, a geospatial application which provides the ability to build and host custom 3D globes and 2D maps. Earth Enterprise does not provide a private version of Google imagery that's currently available in Google Maps or Earth.
The application suite consists of three core components:
  • Fusion - imports and 'fuses' imagery, vector and terrain source data into a single flyable 3D globe or 2D map.
  • Server - Apache or Tornado-based server which hosts the private globes built by Fusion.
  • Client - the Google Earth Enterprise Client (EC) and Google Maps Javascript API V3 used to view 3D globes and 2D maps, respectively.
Earth Enterprise Fusion & Server currently run on 64-bit versions of the following operating systems:
  • Red Hat Enterprise Linux versions 6.0 to 7.2, including the most recent security patches
  • CentOS 6.0 to 7.2
  • Ubuntu 10.04, 12.04 and 14.04 LTS
Refer to the wiki for instructions on building from source on one of these platforms."

https://github.com/google/earthenterprise/

https://github.com/google/earthenterprise/wiki/Build-Instructions

ProjectQ

"We introduce ProjectQ, an open source software effort for quantum computing. The first release features a compiler framework capable of targeting various types of hardware, a high-performance simulator with emulation capabilities, and compiler plug-ins for circuit drawing and resource estimation. We introduce our Python-embedded domain-specific language, present the features, and provide example implementations for quantum algorithms. The framework allows testing of quantum algorithms through simulation and enables running them on actual quantum hardware using a back-end connecting to the IBM Quantum Experience cloud service. Through extension mechanisms, users can provide back-ends to further quantum hardware, and scientists working on quantum compilation can provide plug-ins for additional compilation, optimization, gate synthesis, and layout strategies."

https://arxiv.org/abs/1612.08091

https://projectq.ch/
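
A minimal sketch of the Python-embedded DSL described above, following ProjectQ's canonical single-qubit example (the default MainEngine compiles to the built-in simulator unless another back-end is supplied):

    from projectq import MainEngine
    from projectq.ops import H, Measure

    eng = MainEngine()            # default engine chain with the built-in simulator
    qubit = eng.allocate_qubit()  # request one qubit
    H | qubit                     # put it into an equal superposition
    Measure | qubit               # measure in the computational basis
    eng.flush()                   # push the circuit through the compiler chain
    print(int(qubit))             # 0 or 1, each with probability 1/2

Swapping the back-end (for example, the IBM Quantum Experience connector mentioned above) changes only how the engine is constructed, not the algorithm code.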

Oger

"The OrGanic Environment for Reservoir computing (Oger) toolbox is a Python toolbox, released under the LGPL, for rapidly building, training and evaluating modular learning architectures on large datasets. It builds functionality on top of the Modular toolkit for Data Processing (MDP). Using MDP, Oger provides:
  • Easily building, training and using modular structures of learning algorithms
  • A wide variety of state-of-the-art machine learning methods, such as PCA, ICA, SFA, RBMs, and more.
In addition, Oger provides several extra MDP nodes, such as a:
  • Reservoir node
  • Leaky reservoir node
  • Ridge regression node
  • Conditional Restricted Boltzmann Machine (CRBM) node
  • Perceptron node
See the project site (linked below) for instructions on downloading and installing the toolbox.

There is a general tutorial and examples highlighting some key functions of Oger, along with a PDF version of the tutorial pages, on the same site."
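
A sketch in the spirit of the Oger tutorial: a fixed reservoir node feeding a trainable ridge-regression readout inside an MDP flow. Node and dataset names follow the tutorial conventions and may differ between releases (Oger targets Python 2):

    import mdp
    import Oger

    x, y = Oger.datasets.narma30()                      # toy benchmark time series
    reservoir = Oger.nodes.ReservoirNode(input_dim=1, output_dim=100)
    readout = Oger.nodes.RidgeRegressionNode()          # the only trained component
    flow = mdp.Flow([reservoir, readout])

    # MDP-style training: raw inputs for the fixed reservoir, (input, target)
    # pairs for the supervised readout.
    flow.train([x, zip(x, y)])
    prediction = flow(x[0])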

http://organic.elis.ugent.be/oger

http://reservoir-computing.org/

reservoir computing

"Reservoir computing is a framework for computation that may be viewed as an extension of neural networks.[1] Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that the training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines [2] and echo state networks [3] are two major types of reservoir computing.

Types of reservoir computing are:

Context reverberation network

An early example of reservoir computing was the context reverberation network.[5] In this architecture, an input layer feeds into a high dimensional dynamical system which is read out by a trainable single-layer perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous reaction-diffusion system inspired by Alan Turing’s model of morphogenesis. At the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the reaction-diffusion system served as the reservoir.

Echo state network

Main article: Echo state network

Backpropagation-decorrelation

Backpropagation-Decorrelation (BPDC)

Liquid-state machine

Main article: Liquid-state machine

Reservoir Computing for Structured Data

The Tree Echo State Network [6] (TreeESN) model represents a generalization of the Reservoir Computing framework to tree structured data.
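
The core recipe described at the start of this quote (a fixed random reservoir driven by the input, with only a linear readout trained) can be sketched in plain NumPy; the toy task and all parameter values below are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200

    # Fixed random reservoir: these weights are never trained.
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep spectral radius < 1

    def run_reservoir(u):
        """Drive the reservoir with inputs u (T x n_in); return states (T x n_res)."""
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W_in @ u_t + W_res @ x)
            states.append(x)
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
    X, y = run_reservoir(u[:-1]), u[1:]

    # Training happens only at the readout stage, here via ridge regression.
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))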


http://reservoir-computing.org/

https://en.wikipedia.org/wiki/Reservoir_computing

https://www.researchgate.net/publication/221166209_An_overview_of_reservoir_computing_Theory_applications_and_implementations



TOOLS

Oger - http://organic.elis.ugent.be/oger 

Thursday, March 30, 2017

spatial computing


Organizing the Aggregate: Languages for Spatial Computing - https://arxiv.org/abs/1202.5509

Resiliency with Aggregate Computing - https://arxiv.org/abs/1607.02231

Space-Time Programming - http://web.mit.edu/jakebeal/www/Publications/PTRSA2015-Space-Time-Programming-survey-preprint.pdf

Dialectics for New Computer Science - http://lambda-the-ultimate.org/node/5392

The Foundation of Self-developing Blob Machines for Spatial Computing - http://blob.lri.fr/publication/physisicaD2008.pdf

BLOB Computing - http://pages.saclay.inria.fr/olivier.temam/files/eval/GLRT04.pdf

Unconventional Programming Paradigms - https://www.ercim.eu/EU-NSF/UPP04-proceedings.pdf

Thinking the Unthinkable - http://tomasp.net/academic/drafts/unthinkable/unthinkable-ppig.pdf

Dreamsongs - http://dreamsongs.com/Essays.html

ArXiV Emerging Computer Technologies - https://arxiv.org/list/cs.ET/recent

Deep Reservoir Computing Using Cellular Automata - https://arxiv.org/abs/1703.02806

Protelis - http://protelis.github.io/

Blackadder

"Blackadder is PURSUIT’s new prototype implementation of an information-centric networking environment. It exports a pure publish/subscribe service model to applications, which allows for publish and subscribe operations in a DAG-based information model.

Features include:
  • Supported on Linux (currently working on FreeBSD port)
  • User and kernel space implementation
  • Pub/sub API operating on information graph of labels
  • C++ library for developing applications and wrappers of this library for C, Java, Python and Ruby
  • Realizes core functions of rendezvous, topology management and forwarding
  • Implements 4 different dissemination strategies within the PURSUIT functional model
  • Runs over Ethernet or raw IP (therefore can run in native L2 networks, via openVPN overlay or directly in the Internet)
  • Open source code available
  • Uses the Click Router platform for ease of development
Blackadder is hosted on Github as an open source project. PURSUIT will merge its own developments into the main branch upon approval through the project. Anybody else can clone his or her own project and request merging into the main branch through Github.

You can download the source code as well as the How-To and API documentation from GitHub (linked below).


http://www.fp7-pursuit.eu/PursuitWeb/?page_id=338

https://github.com/fp7-pursuit/blackadder  

http://www.read.cs.ucla.edu/click/projects 

Quagga

"Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by Kunihiro Ishiguro. The Quagga tree aims to build a more involved community around Quagga than the current centralised model of GNU Zebra.

The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically implement a routing protocol and communicate routing updates to the zebra daemon.

Support for OSPFv3 and IS-IS is in various beta states currently; IS-IS for IPv4 is believed to be usable, while OSPFv3 and IS-IS for IPv6 have known issues.

Additionally, the Quagga architecture has a rich development library to facilitate the implementation of protocol/client daemons, coherent in configuration and administrative behaviour.

Quagga daemons are each configurable via a network accessible CLI (called a 'vty'). The CLI follows a style similar to that of other routing software. There is an additional tool included with Quagga called 'vtysh', which acts as a single cohesive front-end to all the daemons, allowing one to administer nearly all aspects of the various Quagga daemons in one place.

Please see the Documentation for further detailed information."

http://www.nongnu.org/quagga/

http://www.zebra.org/

http://nrg.cs.ucl.ac.uk/vrouter/platform/index.html

XORP

"XORP is a modular, extensible, open source networking platform that can be leveraged by:
XORP code can be downloaded and used to build a fully functional PC-based router. A basic implementation of XORP is also available as a Live CD--i.e, a downloadable CD image that can be burned to a bootable CD. This allows XORP to be run without installing any additional software, understanding how XORP works internally, or knowing anything about Linux/Unix system administration.

XORP has a single unified command line interface (CLI) which is used to configure routing protocols and network interfaces. XORP's CLI can be extended to encompass additional router functionality such as queue management, QoS configuration, firewalls, NATs and DHCP configuration. The XORP architecture also permits different routing protocols to run in different security "sandboxes", offering the potential for greater robustness and security than alternative router platforms.

XORP currently supports both IPv4 and IPv6 versions of BGP4+, OSPFv2, OSPFv3, RIP and RIPng for unicast routing, and PIM-SM and IGMP/MLD for multicast. XORP runs on most Linux and *BSD distributions."

http://www.xorp.org/

http://www.read.cs.ucla.edu/click/projects

Vuvuzela

"Vuvuzela is a messaging system that protects the privacy of message contents and message metadata. Users communicating through Vuvuzela do not reveal who they are talking to, even in the presence of powerful nation-state adversaries. Our SOSP 2015 paper explains the system, its threat model, performance, limitations, and more. Our SOSP 2015 slides give a more graphical overview of the system.

Vuvuzela is the first system that provides strong metadata privacy while scaling to millions of users. Previous systems that hide metadata using Tor (such as Pond) are prone to traffic analysis attacks. Systems that encrypt metadata using techniques like DC-nets and PIR don't scale beyond thousands of users.

Vuvuzela uses efficient cryptography (NaCl) to hide as much metadata as possible and adds noise to metadata that can't be encrypted efficiently. This approach provides less privacy than encrypting all of the metadata, but it enables Vuvuzela to support millions of users. Nonetheless, Vuvuzela adds enough noise to thwart adversaries like the NSA and guarantees differential privacy for users' metadata.

Vuvuzela is unable to encrypt two kinds of metadata: the number of idle users (connected users without a conversation partner) and the number of active users (users engaged in a conversation). Without noise, a sophisticated adversary could use this metadata to learn who is talking to who. However, the Vuvuzela servers generate noise that perturbs this metadata so that it is difficult to exploit."

https://vuvuzela.io/

https://github.com/vuvuzela/vuvuzela

https://github.com/jlmart88/vuvuzela-web-client



lwIP

"lwIP (lightweight IP) is a widely used open source TCP/IP stack designed for embedded systems. lwIP was originally developed by Adam Dunkels at the Swedish Institute of Computer Science and is now developed and maintained by a worldwide network of developers.

The focus of the lwIP TCP/IP implementation is to reduce resource usage while still having a full-scale TCP.[3] This makes lwIP suitable for use in embedded systems with tens of kilobytes of free RAM and room for around 40 kilobytes of code ROM.

The features:

Internet layer
  • IP (Internet Protocol) including packet forwarding over multiple network interfaces
  • ICMP (Internet Control Message Protocol) for network maintenance and debugging
  • IGMP (Internet Group Management Protocol) for multicast traffic management
Transport layer
  • UDP (User Datagram Protocol) including experimental UDP-Lite extensions
  • TCP (Transmission Control Protocol) with congestion control, RTT estimation and fast recovery/fast retransmit
Application layer
  • DNS (Domain Name System)
  • SNMP (Simple Network Management Protocol)
  • DHCP (Dynamic Host Configuration Protocol)
Link layer
  • PPP (Point-to-Point Protocol)
  • ARP (Address Resolution Protocol) for Ethernet
https://en.wikipedia.org/wiki/LwIP

https://savannah.nongnu.org/projects/lwip/

https://en.wikipedia.org/wiki/UIP_%28micro_IP%29

Xv6

"Xv6 is a teaching operating system developed in the summer of 2006 for MIT's operating systems course, 6.828: Operating System Engineering. We hope that xv6 will be useful in other courses too. This page collects resources to aid the use of xv6 in other courses, including a commentary on the source code itself.

For many years, MIT had no operating systems course. In the fall of 2002, one was created to teach operating systems engineering. In the course lectures, the class worked through Sixth Edition Unix (aka V6) using John Lions's famous commentary. In the lab assignments, students wrote most of an exokernel operating system, eventually named Jos, for the Intel x86. Exposing students to multiple systems–V6 and Jos–helped develop a sense of the spectrum of operating system designs.

V6 presented pedagogic challenges from the start. Students doubted the relevance of an obsolete 30-year-old operating system written in an obsolete programming language (pre-K&R C) running on obsolete hardware (the PDP-11). Students also struggled to learn the low-level details of two different architectures (the PDP-11 and the Intel x86) at the same time. By the summer of 2006, we had decided to replace V6 with a new operating system, xv6, modeled on V6 but written in ANSI C and running on multiprocessor Intel x86 machines. Xv6's use of the x86 makes it more relevant to students' experience than V6 was and unifies the course around a single architecture. Adding multiprocessor support requires handling concurrency head on with locks and threads (instead of using special-case solutions for uniprocessors such as enabling/disabling interrupts) and helps relevance. Finally, writing a new system allowed us to write cleaner versions of the rougher parts of V6, like the scheduler and file system. 6.828 substituted xv6 for V6 in the fall of 2006."

https://pdos.csail.mit.edu/6.828/2016/xv6.html

https://github.com/aclements/sv6


Commentary on the Sixth Edition UNIX Operating System

"This directory contains a copy of John Lion's “A commentary on the Sixth Edition UNIX Operating System”. This form of the document is the one that Warren Toomey published on the USENET alt.folklore.computers newsgroup in May 1994. It's available in several forms."

http://www.lemis.com/grog/Documentation/Lions/

https://pdos.csail.mit.edu/6.828/2016/xv6.html

latexrun

"latexrun fits LaTeX into a modern build environment. It hides LaTeX's circular dependencies, surfaces errors in a standard and user-friendly format, and generally enables other tools to do what they do best.

The features:
  • Runs latex the right number of times. LaTeX's iterative approach is a poor match for build tools that expect to run a task once and be done with it. latexrun hides this complexity by running LaTeX (and BibTeX) as many times as necessary and no more. Only the results from the final run are shown, making it act like a standard, single-run build task.
  • Surfaces error messages and warnings. LaTeX and related tools bury errors and useful warnings in vast wastelands of output noise. latexrun prints only the messages that matter, in a format understood by modern tools. latexrun even figures out file names and line numbers for many BibTeX errors that usually don't indicate their source.

      paper.tex:140: Overfull \hbox (15pt too wide)
      paper.tex:400: Reference `sec:eval' on page 5 undefined
      local.bib:230: empty booktitle in clements:commutativity
     
  • Incremental progress reporting. latexrun keeps you informed of LaTeX's progress, without overwhelming you with output.
  • Cleaning. LaTeX's output files are legion. latexrun keeps track of them and can clean them up for you.
  • Atomic commit. latexrun updates output files atomically, which means your PDF reader will no longer crash or complain about broken xref tables when it catches latex in the middle of writing out a PDF.
  • Easy {.git,.hg,svn:}ignore. Just ignore latex.out/. Done!
  • Self-contained. latexrun is a single, self-contained Python script that can be copied in to your source tree so your collaborators don't have to install it.
Kitchen sink not included. latexrun is not a build system. It will not convert your graphics behind your back. It will not continuously monitor your files for changes. It will not start your previewer for you. latexrun is designed to be part of your build system and let other tools do what they do well."

https://github.com/aclements/latexrun

http://norswap.com/latex-tooling/

Wednesday, March 29, 2017

OpenGTS

"OpenGTS™ ("Open GPS Tracking System") is the first available open source project designed specifically to provide web-based GPS tracking services for a "fleet" of vehicles.

OpenGTS not only supports the data collection and storage of GPS Tracking and Telemetry data from remote devices, but also includes the following rich set of features:
  • Web-based authentication: Each account can support multiple users, and each user has its own login password and controlled access to sections within their account.
  • GPS tracking device independent: Devices from different manufacturers can be tracked simultaneously. Support for the following GPS tracking devices is included with OpenGTS:
    • Most TK102/TK103 tracking devices (using the common TK102/TK103 protocols).
    • Astra Telematics AT240, AT110, AT210
    • Sanav GC-101, MT-101, and CT-24 Personal Tracker (HTTP-based protocol)
    • Sanav GX-101 Vehicle Tracker (HTTP-based protocol)
    • CelltracGTS™/Free for Android phones
    • CelltracGTS™/Pro for Android phones
    • Aspicore GSM Tracker (Nokia, Samsung, Sony Ericsson phones)
    • TAIP (Trimble ASCII Interface Protocol).
    • TrackStick GPS data logger
    • "GPSMapper" capable phones.
    • "NetGPS" capable devices.
    With custom coding, other devices can also be integrated using the included example "template" device communication server.
  • Customizable web-page decorations: The look and feel of the tracking web site can easily be customized to fit the motif of the specific company.
  • Customizable mapping service: OpenGTS comes with support for OpenLayers/OpenStreetMap in addition to support for Google Maps, Microsoft Virtual Earth, and Mapstraction (which provides mapping support for MultiMap, Map24, MapQuest, and more). Within the OpenGTS framework, other mapping service providers can also easily be integrated.
  • Customizable reports: Using an internal XML-based reporting engine, detail and summary reports can be customized to show historical data for a specific vehicle, or for the fleet.
  • Customizable geofenced areas: Custom geofenced areas (geozones) can be set up to provide arrival/departure events on reports. Each geozone can also be named to provide a custom 'address' which is displayed on reports when inside the geozone (for instance "Main Office").
  • Operating system independent: OpenGTS itself is written entirely in Java, using technologies such as Apache Tomcat for web service deployment, and MySQL for the datastore. As such, OpenGTS will run on any system which supports these technologies (including Linux, Mac OS X, FreeBSD, OpenBSD, Solaris, Windows XP, Windows Vista, Windows 20XX, and more).
  • i18n Compliant: OpenGTS is i18n compliant and supports easy localization (L10N) to languages other than English. Languages supported currently include Dutch, English, French, German, Greek, Hungarian, Italian, Portuguese, Romanian, Russian, Slovak, Spanish, Serbian, and Turkish. 
http://opengts.sourceforge.net/

http://opengts.sourceforge.net/documentation.html

https://www.wmo.int/pages/prog/amp/mmop/.../DBCP-32FinalReportCG_V8_ECh.pdf

csv2nc

"Routines to convert GCOOS CSV files as generated from the GCOOS WAF (http://data.gcoos.org/data/waf) to a netCDF4 (Classic) in compliance to IOOS standard based on the NCEI recommendations at https://sites.google.com/a/noaa.gov/ncei-ioos-archive/cookbook?pli=1#TOC-Providing-Data-Integrity and in compliance with the NODC Profile Orthogonal specification at http://www.nodc.noaa.gov/data/formats/netcdf/v1.1/profileOrthogonal.cdl

Other links of interest include:
https://github.com/GCOOS/csv2nc

WAF

This describes how to host and manage your own Web Accessible Folder for ISO metadata. Issues encountered in the community will be summarized to facilitate WAFs. The setting-up of a WAF is strongly encouraged to ease the harvesting process by the IOOS Registry (NGDC EMMA).

Guidance for hosting a WAF:
1. Creating and hosting your own Web Accessible Folder of ISO metadata
  • Using ncISO to create metadata for THREDDS catalogs
  • Tips for creating well curated THREDDS Catalogs
  • ncISO Home
  • Pulling ISO records from ERDDAP
  • Getting ncISO (or thredds_crawler) to parse multiple service types from catalog entries (issue #36)
  • OWS to ISO: Creating an ISO document from an OGC Web Services GetCapabilities document (or URL)
  • Simple python script to run ncISO and create a WAF
https://github.com/ioos/registry/wiki/Hosting-Your-Own-WAF

https://github.com/ioos/registry/wiki/Python-Scripts-for-creating-WAFs

URN

"The functional requirements for Uniform Resource Names were described in 1994 by RFC 1737,[1] and the syntax was defined in 1997 in RFC 2141[2]. In these standards, URNs were conceived to be part of a three-part information architecture for the Internet, along with Uniform Resource Locators (URLs) and Uniform Resource Characteristics (URCs), a metadata framework. URNs were distinguished from URLs, which identify resources by specifying their locations in the context of a particular access protocol, such as HTTP or FTP. In contrast, URNs were conceived as persistent, usually opaque or at least location-independent, identifiers assigned within defined namespaces, typically by an authority responsible for the namespace, so that they are globally unique and persistent over long periods of time, even after the resource which they identify ceases to exist or becomes unavailable.

Use of the terms "Uniform Resource Name" and "Uniform Resource Locator" has been deprecated in technical standards in favor of the term Uniform Resource Identifier (URI), which encompasses both.   A URI is a string of characters used to identify or name a resource. URIs are used in many Internet protocols to refer to and access information resources. URI schemes include the familiar http, as well as hundreds of others.  In the "contemporary view", as it is called, all URIs identify or name resources, perhaps uniquely and persistently, with some of them also being "locators" which are resolvable in conjunction with a specified protocol to a representation of the resources.

Other URIs are not locators and are not necessarily resolvable within the bounds of the systems where they are found. These URIs may serve as names or identifiers of resources. Since resources can move, opaque identifiers which are not locators and are not bound to particular locations are arguably more likely than identifiers which are locators to remain unique and persistent over time. But whether a URI is resolvable depends on many operational and practical details, irrespective of whether it is called a "name" or a "locator". In the contemporary view, there is no bright line between "names" and "locators". In accord with this way of thinking, the distinction between Uniform Resource Names and Uniform Resource Locators is now no longer used in formal Internet Engineering Task Force technical standards, though the latter term, URL, is still in wide informal use.

The term "URN" continues now as one of more than a hundred URI "schemes", urn:, paralleling http:, ftp:, and so forth. URIs of the urn: scheme are not necessarily locators; are not required to be associated with a particular protocol or access method; and need not be resolvable. They should be assigned by a procedure which provides some assurance that they will remain unique and identify the same resource persistently over a prolonged period. Some namespaces under the urn: scheme, such as urn:uuid: assign identifiers in a manner which does not require a registration authority, but most of them do. A typical URN namespace is urn:isbn, for International Standard Book Numbers."

https://en.wikipedia.org/wiki/Uniform_Resource_Name

IOOS Observing Asset Identifiers

An IOOS identifier is a Uniform Resource Identifier (URI). URIs are commonly used as identifiers in the Internet’s information architecture; an introductory description of URIs may be found on Wikipedia. In fact, IOOS identifiers are based on a specific form of URI called a Uniform Resource Name (URN), which was designed for the identification of resources in particular namespaces. The URN syntax is described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 2141; the definitions and restrictions established by RFC 2141 determine the syntactic structure of the IOOS identifiers.

All URIs assigned by IOOS begin with the string urn:ioos:, followed by one or more fields also separated by colons (:). The initial urn: indicates that the URI is indeed a URN. The following ioos: indicates that the URN is in the IOOS namespace. The ioos namespace has not been formally registered with IANA; therefore, it is a rather informal, community-wide namespace.

The additional fields may only include letters and numbers (A-Z, a-z, 0-9) and the following characters: ( ) + , - . = @ ; $ _ ! * . Special characters not in the foregoing list must be represented using hexadecimal encoding as %xx, where xx represents a two-digit hex value. The use of such characters in IOOS URNs is not recommended.
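
As a purely illustrative sketch of that syntax (the field values below are made up; the conventions document linked below defines which fields are actually expected):

    import re

    ALLOWED = re.compile(r"[A-Za-z0-9_()+,\-.=@;$!*]")

    def encode_field(field):
        """Percent-encode any character outside the allowed set as %XX."""
        return "".join(c if ALLOWED.match(c) else "%%%02X" % ord(c) for c in field)

    def ioos_urn(*fields):
        """Join fields with ':' under the informal urn:ioos: namespace."""
        return "urn:ioos:" + ":".join(encode_field(f) for f in fields)

    print(ioos_urn("station", "wmo", "41001"))   # -> urn:ioos:station:wmo:41001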

http://ioos.github.io/conventions-for-observing-asset-identifiers/ioos_assets_v1.0/

http://ioos.github.io/conventions-for-observing-asset-identifiers/

http://ioos.github.io/


assetid

An ocean data asset ID parser developed and used by Axiom Data Science.

https://github.com/axiom-data-science/assetid

IOOS

"The Integrated Ocean Observing System (IOOS) is an organization of systems that routinely and continuously provides quality controlled data and information on current and future states of the oceans and Great Lakes from the global scale of ocean basins to local scales of coastal ecosystems. It is a multidisciplinary system designed to provide data in forms and at rates required by decision makers to address seven societal goals.

IOOS is developing as a multi-scale system that incorporates two interdependent components: a global ocean component, called the Global Ocean Observing System, with an emphasis on ocean-basin scale observations, and a coastal component that focuses on local to Large Marine Ecosystem (LME) scales.

The coastal component consists of Regional Coastal Ocean Observing Systems (RCOOSs) nested in a National Backbone of coastal observations. From a coastal perspective, the global ocean component is critical for providing data and information on basin scale forcings (e.g., ENSO events), as well as providing the data and information necessary to run coastal models (such as storm surge models).

Alaska Ocean Observing System AOOS
Central and Northern California Ocean Observing System CeNCOOS
Great Lakes Observing System GLOS
Gulf of Maine Ocean Observing System GoMOOS
Gulf of Mexico Coastal Ocean Observing System GCOOS
Pacific Islands Ocean Observing System PacIOOS
Mid-Atlantic Coastal Ocean Observing Regional Association MACOORA
Northwest Association of Networked Ocean Observing Systems NANOOS
Southern California Coastal Ocean Observing System SCCOOS
Southeast Coastal Ocean Observing Regional Association SECOORA
Caribbean Integrated Ocean Observing System CarICOOS

https://ioos.noaa.gov/

 IOOS GitHub Pages

  • Projects
  • Guidelines and specifications
  • Data Demo Center

    The IOOS Notebook Gallery is a collection of tutorials and examples of how to access and utilize the many IOOS technologies and data sources available. This site is geared towards scientists and environmental managers interested in “diving deep” into the numbers and creating original plots and data analysis. Most notebooks will be examples using Python code. Over time, we plan to include notebooks with Matlab, R, and ArcGIS code as well. The notebooks will come from a variety of authors including IOOS Program Office Staff, Regional Association data managers, and other IOOS partners.

    1. Installing the IOOS conda environment
    2. Opening netCDF files - hints from AODN
    3. Unidata Jupyter notebook gallery
    4. Extracting and enriching OBIS data with R
    5. USGS-R examples

    http://ioos.github.io/notebooks_demos/ 
  • TerriaJS

    "TerriaJS is a library for building rich, web-based geospatial data explorers, used to drive National Map, AREMI and NEII Viewer. It uses Cesium and WebGL for a full 3D globe in the browser with no plugins. It gracefully falls back to 2D with Leaflet on systems that can't run Cesium. It can handle catalogs of thousands of layers, with dozens of geospatial file and web service types supported. It is almost entirely JavaScript in the browser, meaning it can even be deployed as a static website, making it simple and cheap to host.

    The features include:
    • Nested catalog of layers which can be independently enabled to create mashups of many layers.
    • Supports GeoJSON, KML, CSV (point and region-mapped), GPX and CZML file types natively, and others including zipped shapefiles with an optional server-side conversion service.
    • Supports WMS, WFS, Esri MapServer, ABS ITT, Bing Maps, OpenStreetMap-style raster tiles, Mapbox, Urthecast, and WMTS item types.
    • Supports querying WMS, WFS, Esri MapServer, CSW, CKAN and Socrata services for groups of items.
    • 3D globe (Cesium) or 2D mode (Leaflet). 3D objects supported in CZML format.
    • Time dimensions supported for CSV, CZML, WMS. Automatically animate layers, or slide the time control forward and backward.
    • Drag-and-drop files from your desktop to the browser, for instant visualisation (no file upload to server required).
    • Wider range of file types supported through server-side OGR2OGR service (requires upload).
    • All ASGS (Australian Statistical Geographic Standard) region types (LGA, SA2, commonwealth electoral district etc) supported for CSV region mapping, plus several others: Primary Health Networks, Statistical Local Areas, ISO 3 letter country codes, etc.
    • Users can generate a reusable URL link of their current map view, to quickly share mashups of web-hosted data. Google's URL shortener is optionally used.
    Components and naming:
    • Terria™ is the overall name for the spatial data platform, including closed-source spatial analytics developed by Data61.
    • TerriaJS is this JavaScript library consisting of the 2D/3D map, catalog management and many spatial data connectors.
    • Cesium is the 3D WebGL rendering library used by TerriaJS, which provides many low-level functions for loading and displaying imagery and spatial formats such as GeoJSON and KML.
    • TerriaMap is a complete website starting point, using TerriaJS.
    • TerriaJS-Server is a NodeJS-based server that provides proxying and support services for TerriaJS.
    • NationalMap is the flagship Terria deployment, and the origin of the TerriaJS library.
    Related components:
    • Catalog Editor, an automatically generated web interface for creating and editing catalog (init) files.
    • Generate-TerriaJS-Schema, a tool which automatically generates a schema for validating catalog files, and also the editor, by processing TerriaJS source code.
    • TerriaMapStatic, a pre-built version of TerriaMap, which can be deployed as a static HTML website, such as on Github Pages.
    https://github.com/TerriaJS/terriajs

    http://terria.io/

    Numpy

    "NumPy is the fundamental package for scientific computing with Python. It contains among other things:
    • a powerful N-dimensional array object
    • sophisticated (broadcasting) functions
    • tools for integrating C/C++ and Fortran code
    • useful linear algebra, Fourier transform, and random number capabilities
    Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.

    NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy addresses the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays, requiring (re)writing some code, mostly inner loops, using NumPy.

    Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted,[3] and they both allow the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more modern and complete programming language. Moreover, complementary Python packages are available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides MATLAB-like plotting functionality. Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient linear algebra computations.

    The core functionality of NumPy is its "ndarray", for n-dimensional array, data structure. These arrays are strided views on memory.[4] In contrast to Python's built-in list data structure (which, despite the name, is a dynamic array), these arrays are homogeneously typed: all elements of a single array must be of the same type.

    Such arrays can also be views into memory buffers allocated by C/C++, Cython, and Fortran extensions to the CPython interpreter without the need to copy data around, giving a degree of compatibility with existing numerical libraries. This functionality is exploited by the SciPy package, which wraps a number of such libraries (notably BLAS and LAPACK). NumPy has built-in support for memory-mapped ndarrays."
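
    A short sketch of the points above: a homogeneously typed ndarray, a reduction, and broadcasting in place of an explicit Python loop:

        import numpy as np

        a = np.arange(12, dtype=np.float64).reshape(3, 4)   # strided view on one buffer
        col_means = a.mean(axis=0)                           # column means, shape (4,)
        centered = a - col_means                             # broadcasting: (3, 4) minus (4,)

        print(a.dtype, a.shape, a.strides)
        print(centered.mean(axis=0))                         # ~0 in every column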

    http://www.numpy.org/

    https://github.com/numpy/numpy

    https://morepypy.blogspot.com/2011/05/numpy-in-pypy-status-and-roadmap.html

    http://pypy.org/numpydonate.html

    http://cython.readthedocs.io/en/latest/src/tutorial/numpy.html

    http://deeplearning.net/software/theano/introduction.html

    http://dask.pydata.org/en/latest/examples/array-numpy.html

    http://ipython-books.github.io/featured-01/



    Kona

    "Kona is the open-source implementation of the K programming language. K is a synthesis of APL and LISP. Although many of the capabilities come from APL, the fundamental data construct is quite different. In APL the construct is a multi-dimensional matrix-like array, where the dimension of the array can range from 0 to some maximum (often 9). In K, like LISP, the fundamental data construct is a list. Also, like LISP, the K language is ASCII-based, so you don't need a special keyboard.

    For many people, K was the preferred APL dialect. When it was available, it tended to be popular with investment bankers, the performance obsessed, and analysts dealing with lots of data. It is a demanding language.

    K was originally designed by Arthur Whitney and Kx Systems. Praise for K should be taken to refer to Kx's K. Kx sells a popular database called KDB+. People can and do create networked trading platforms in hours. If your business needs production support, you can evaluate KDB+ prior to purchasing from Kx, or possibly speak with Kx consulting partner First Derivatives. The 32-bit version of KDB+ is available for free.

    Kx's KDB+ uses the Q language, and is built on top of K4. Kx used to sell a database called KDB, which used the KSQL language, and was built on top of K3. Earlier, Kx sold K2 as its primary product. Before K2, UBS had a 5-year exclusive license to K1. To the confusion of all, these terms are used interchangeably. Kx's K3, K2 and K1 are basically no longer available. While you get K4 with KDB+, K4 is proprietary to Kx and no documentation is available. Kona is a reimplementation that targets K3 but includes features inferred from K4 or implemented elsewhere. Kona is unaffiliated with Kx."

    https://github.com/kevinlawler/kona

    http://lambda-the-ultimate.org/node/4248

    http://www.hakank.org/k/

    https://en.wikipedia.org/wiki/K_%28programming_language%29

    https://paulbatchelor.github.io/blog/posts/2016-03-27-konasporth.html

    https://scottlocklin.wordpress.com/2012/09/18/a-look-at-the-j-language-the-fine-line-between-genius-and-insanity/

    A+

    "A+ is a descendent of the language "A" created in 1988 by Arthur Whitney at Morgan Stanley. At the time, various departments had a significant investment in APL applications and talent, APL being a language well-suited to the manipulation of large arrays of numbers. As technology was moving from the mainframe to distributed systems, there was a search for a suitable APL implementation to run on SunOS, the distributed platform of the period. Not happy with the systems evaluated, Arthur, motivated by management, wrote one geared to the business: large capacity, high performance. He was joined in his efforts as the language took on graphics, systems' interfaces, utility support, and an ever-widening user community. Over the course of the next few years, as the business began to reap tangible value from the efforts, the pieces were shaped into a consistent whole and became A+. The "+" referred to the electric graphical user interface. An A+ development group was formally created in 1992.

    A+ soon became the language of choice for development of Fixed Income applications. It offered familiarity to the APL programmers, the advantages of an interpreter in a fast-paced development arena and admirable floating-point performance. A significant driver was that many of Morgan Stanley's best and brightest were the developers and supporters of the language. Through their practical application of technical values, they instilled fervent enthusiasm in talented programmers, regardless of their programming language backgrounds.

    A+ is a powerful and efficient programming language. It is freely available under the GNU General Public License. It embodies a rich set of functions and operators, a modern graphical user interface with many widgets and automatic synchronization of widgets and variables, asynchronous execution of functions associated with variables and events, dynamic loading of user compiled subroutines, and many other features. Execution is by a rather efficient interpreter. A+ was created at Morgan Stanley. Primarily used in a computationally-intensive business environment, many critical applications written in A+ have withstood the demands of real world developers over many years. Written in an interpreted language, A+ applications tend to be portable. "

    http://www.aplusdev.org/index.html

    https://news.ycombinator.com/item?id=13973812

    Co-dfns

    "The Co-dfns project aims to provide a high-performance, high-reliability compiler for a parallel extension of the Dyalog dfns programming language. The dfns language is a functionally oriented, lexically scoped dialect of APL. The Co-dfns language extends the dfns language to include explicit task parallelism with implicit structures for synchronization and determinism. The language is designed to enable rigorous formal analysis of programs to aid in compiler optimization and programmer productivity, as well as in the general reliability of the code itself.

    Our mission is to deliver scalable APL programming to information and domain experts across many fields, expanding the scope and capabilities of what you can effectively accomplish with APL.

    Co-dfns follows a rapid release cycle. Releases can be found here:

    https://github.com/arcfide/Co-dfns/releases

    http://arcfide.github.io/Co-dfns/

    https://github.com/arcfide/Co-dfns

    https://news.ycombinator.com/item?id=13565743

    J

    "J (J language) is a high-level, general-purpose, high-performance programming language. J is portable and runs on 32/64-bit Windows/Linux/Mac as well as iOS, Android, and other platforms. J source (required only if Jsoftware binaries don't meet your requirements) is available under both commercial and GPL 3 license. J systems can be installed and distributed for free.

    J is particularly strong in the mathematical, statistical, and logical analysis of data. It is a powerful tool in building new and better solutions to old problems and even better at finding solutions where the problem is not already well understood.
    J systems have:
    • an integrated development environment
    • standard libraries, utilities, and packages
    • console, browser, and Qt front ends
    • interfaces with other programming languages and applications
    • integrated graphics
    • memory mapped files for high performance data applications
    J is an ASCII-only extension of APL and Ken Iverson - the creator of APL - was involved in the making of J."

    http://www.jsoftware.com/

    https://curtisautery.appspot.com/5776042744610816

    Tuesday, March 28, 2017

    eofs

    "eofs is a Python package for performing empirical orthogonal function (EOF) analysis on spatial-temporal data sets, licensed under the GNU GPLv3.

    The package was created to simplify the process of EOF analysis in the Python environment. Some of the key features are listed below:
    • Suitable for large data sets: computationally efficient for the large data sets typical of modern climate model output.
    • Transparent handling of missing values: missing values are removed automatically when computing EOFs and re-inserted into output fields.
    • Meta-data preserving interfaces (optional): works with the iris data analysis package, xarray, or the cdms2 module (from UV-CDAT) to carry meta-data over from input fields to output.
    • No Fortran dependencies: written in Python using the power of NumPy, no compilers required.
    eofs only requires the NumPy package (and setuptools to install). In order to use the meta-data preserving interfaces one (or more) of cdms2 (part of UV-CDAT), iris, or xarray is needed.

    Documentation is available online. The package docstrings are also very complete and can be used as a source of reference when working interactively."
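
    A minimal sketch using the plain NumPy interface (eofs.standard); the random field below stands in for a real space-time data set with time as the leading dimension:

        import numpy as np
        from eofs.standard import Eof

        data = np.random.rand(120, 20, 30)        # 120 time steps on a 20 x 30 grid

        solver = Eof(data)
        eof1 = solver.eofs(neofs=1)               # leading spatial pattern, shape (1, 20, 30)
        pc1 = solver.pcs(npcs=1, pcscaling=1)     # its time series, shape (120, 1)
        frac = solver.varianceFraction(neigs=1)   # fraction of variance it explains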

    https://github.com/ajdawson/eofs

    http://ajdawson.github.io/eofs/

    https://anaconda.org/conda-forge/eofs

    SensorThings

    "SensorThings API[1] is an Open Geospatial Consortium (OGC) standard providing an open and unified framework to interconnect IoT sensing devices, data, and applications over the Web. It is an open standard addressing the syntactic interoperability and semantic interoperability of the Internet of Things. It complements the existing IoT networking protocols such CoAP, MQTT, HTTP, 6LowPAN. While the above-mentioned IoT networking protocols are addressing the ability for different IoT systems to exchange information, OGC SensorThings API is addressing the ability for different IoT systems to use and understand the exchanged information. As an OGC standard, SensorThings API also allows easy integration into existing Spatial Data Infrastructures or Geographic Information Systems.

    SensorThings API is designed specifically for resource-constrained IoT devices and the Web developer community. It follows REST principles, the JSON encoding, and the OASIS OData protocol and URL conventions. Also, it has an MQTT extension allowing users/devices to publish and subscribe updates from devices, and can use CoAP in addition to HTTP.

    The foundation of the SensorThings API is its data model that is based on the ISO 19156 (ISO/OGC Observations and Measurements), that defines a conceptual model for observations, and for features involved in sampling when making observations. In the context of the SensorThings, the features are modelled as Things, Sensors (i.e., Procedures in O&M), and Features of Interest. As a result, the SensorThings API provides an interoperable Observation-focused view, that is particularly useful to reconcile the differences between heterogeneous sensing systems (e.g., in-situ sensors and remote sensors).

    An IoT device or system is modelled as a Thing. A Thing has an arbitrary number of Locations (including 0 Locations) and an arbitrary number of Datastreams (including 0 Datastreams). Each Datastream observes one ObservedProperty with one Sensor and has many Observations collected by the Sensor. Each Observation observes one particular FeatureOfInterest. The O&M based model allows SensorThings to accommodate heterogeneous IoT devices and the data collected by the devices.

    SensorThings API provides two main functionalities, each handled by a profile. The two profiles are the Sensing profile and the Tasking profile. The Sensing profile provides a standard way to manage and retrieve observations and metadata from heterogeneous IoT sensor systems, and the Sensing profile functions are similar to the OGC Sensor Observation Service. The Tasking profile provides a standard way for parameterizing - also called tasking - of task-able IoT devices, such as sensors or actuators. The Tasking profile functions are similar to the OGC Sensor Planning Service. The Sensing profile is designed based on the ISO/OGC Observations and Measurements (O&M) model, and allows IoT devices and applications to CREATE, READ, UPDATE, and DELETE (i.e., HTTP POST, GET, PATCH, and DELETE) IoT data and metadata in a SensorThings service.

    SensorThings API defines the following resources. As SensorThings is a RESTful web service, each entity can be created, read, updated, and deleted with standard HTTP verbs (POST, GET, PATCH, and DELETE):[4][5]
    • Thing: An object of the physical world (physical things) or the information world (virtual things) that is capable of being identified and integrated into communication networks.[6]
    • Locations: Locates the Thing or the Things it is associated with.
    • HistoricalLocations: Provides the current (i.e., last known) and previous locations of the Thing with their times.
    • Datastream: A collection of Observations and the Observations in a Datastream measure the same ObservedProperty and are produced by the same Sensor.
    • ObservedProperty: Specifies the phenomenon of an Observation.
    • Sensor: An instrument that observes a property or phenomenon with the goal of producing an estimate of the value of the property.
    • Observation: Act of measuring or otherwise determining the value of a property.[7]
    • FeatureOfInterest: An Observation results in a value being assigned to a phenomenon. The phenomenon is a property of a feature, the latter being the FeatureOfInterest of the Observation.
    https://en.wikipedia.org/wiki/SensorThings_API
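
    A hedged sketch of reading those resources over HTTP with OData-style query options (the service base URL below is hypothetical):

        import requests

        base = "http://example.org/SensorThingsService/v1.0"   # hypothetical endpoint

        things = requests.get(base + "/Things").json()["value"]
        obs = requests.get(base + "/Datastreams(1)/Observations",
                           params={"$top": 5, "$orderby": "phenomenonTime desc"}).json()

        for o in obs["value"]:
            print(o["phenomenonTime"], o["result"])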

    https://github.com/opengeospatial/sensorthings

    https://github.com/FraunhoferIOSB/SensorThingsServer

    https://github.com/nsommer/SensorThingsClient

    Tika

    "The Apache Tika™ toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more. You can find the latest release on the download page. Please see the Getting Started page for more information on how to start using Tika.

    The Parser and Detector pages describe the main interfaces of Tika and how they work.
    https://tika.apache.org/

    " A Python port of the Apache Tika library that makes Tika available using the Tika REST Server."

    https://github.com/chrismattmann/tika-python
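
    A minimal sketch of that usage (tika-python starts or connects to a Tika REST server behind the scenes; the file name is a placeholder):

        from tika import parser

        parsed = parser.from_file("example.pdf")   # any of the supported file types
        print(parsed["metadata"])                  # detected content type, author, ...
        print(parsed["content"][:500])             # start of the extracted plain text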

    Stan

    "Thousands of users rely on Stan for statistical modeling, data analysis, and prediction in the social, biological, and physical sciences, engineering, and business.

    Users specify log density functions in Stan’s probabilistic programming language and get:
    • full Bayesian statistical inference with MCMC sampling (NUTS, HMC)
    • approximate Bayesian inference with variational inference (ADVI)
    • penalized maximum likelihood estimation with optimization (L-BFGS)
    Stan’s math library provides differentiable probability functions & linear algebra (C++ autodiff). Additional R packages provide expression-based linear modeling, posterior visualization, and leave-one-out cross-validation."

    http://mc-stan.org/

    "PyStan provides an interface to Stan, a package for Bayesian inference using the No-U-Turn sampler, a variant of Hamiltonian Monte Carlo."

    https://pystan.readthedocs.io/en/latest/
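
    A small sketch against the PyStan 2 interface (StanModel/sampling); the model and data are toy values, not taken from the text above:

        import pystan

        model_code = """
        data { int<lower=0> N; vector[N] y; }
        parameters { real mu; real<lower=0> sigma; }
        model { y ~ normal(mu, sigma); }
        """

        model = pystan.StanModel(model_code=model_code)       # compile once
        fit = model.sampling(data={"N": 5, "y": [1.2, 0.7, 1.9, 0.3, 1.1]},
                             iter=1000, chains=4)             # NUTS by default
        print(fit)                                            # posterior summary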

    xtensor

    "xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions.
    xtensor provides
    • an extensible expression system enabling lazy broadcasting.
    • an API following the idioms of the C++ standard library.
    • tools to manipulate array expressions and build upon xtensor.
    Containers of xtensor are inspired by NumPy, the Python array programming library. Adaptors for existing data structures to be plugged into our expression system can easily be written. In fact, xtensor can be used to process numpy data structures inplace using Python’s buffer protocol. For more details on the numpy bindings, check out the xtensor-python project.
    xtensor requires a modern C++ compiler supporting C++14. The following C++ compilers are supported:
    • On Windows platforms, Visual C++ 2015 Update 2, or more recent
    • On Unix platforms, gcc 4.9 or a recent version of Clang
     xtensor is a header-only library."

    http://quantstack.net/xtensor


    Python bindings for the xtensor C++ multi-dimensional array library.
    • xtensor is a C++ library for multi-dimensional arrays enabling numpy-style broadcasting and lazy computing.
    • xtensor-python enables inplace use of numpy arrays in C++ with all the benefits from xtensor
    The Python bindings for xtensor are based on the pybind11 C++ library, which enables seamless interoperability between C++ and Python.

    https://github.com/QuantStack/xtensor-python

    VOLK

    "VOLK is the Vector-Optimized Library of Kernels. It is a free library, currently offered under the GPLv3, that contains kernels of hand-written SIMD code for different mathematical operations. Since each SIMD architecture can be very different and no compiler has yet come along to handle vectorization properly or highly efficiently, VOLK approaches the problem differently.

    For each architecture or platform that a developer wishes to vectorize for, a new proto-kernel is added to VOLK. At runtime, VOLK will select the correct proto-kernel. In this way, the users of VOLK call a kernel for performing the operation that is platform/architecture agnostic. This allows us to write portable SIMD code that is optimized for a variety of platforms.

    VOLK was introduced as a part of GNU Radio in late 2010 based on code released in the public domain. In 2015 it was released as an independent library for use by a wider audience."

    http://libvolk.org/

    https://github.com/gnuradio/volk

    https://github.com/srsLTE/srsLTE

    pNFS


    "High-performance data centers have been aggressively moving toward parallel technologies like clustered computing and multi-core processors. While this increased use of parallelism overcomes the vast majority of computational bottlenecks, it shifts the performance bottlenecks to the storage I/O system. To ensure that compute clusters deliver the maximum performance, storage systems must be optimized for parallelism. Legacy Network Attached Storage (NAS) architectures based on NFS v4.0 and earlier have serious performance bottlenecks and management challenges when implemented in conjunction with large scale, high performance compute clusters.

    A consortium of storage industry technology leaders created a parallel NFS (pNFS) protocol as an optional extension of the NFS v4.1 standard. pNFS takes a different approach by allowing compute clients to read and write directly to the storage, eliminating filer head bottlenecks and allowing single file system capacity and performance to scale linearly.

    pNFS removes the performance bottleneck in traditional NAS systems by allowing the compute clients to read and write data directly and in parallel, to and from the physical storage devices. The NFS server is used only to control metadata and coordinate access, allowing incredibly fast access to very large data sets from many clients.

    When a client wants to access a file it first queries the metadata server which provides it with a map of where to find the data and with credentials regarding its rights to read, modify, and write the data. Once the client has those two components, it communicates directly to the storage devices when accessing the data. With traditional NFS every bit of data flows through the NFS server – with pNFS the NFS server is removed from the primary data path allowing free and fast access to data. All the advantages of NFS are maintained but bottlenecks are removed and data can be accessed in parallel allowing for very fast throughput rates; system capacity can be easily scaled without impacting overall performance."

    http://www.pnfs.com/

    http://wiki.linux-nfs.org/wiki/index.php/Main_Page

    http://wiki.linux-nfs.org/wiki/index.php/PNFS_Development

    Linux pNFS

    Linux pNFS features a pluggable client and server architecture that harnesses the potential of pNFS as a universal and scalable metadata protocol by enabling dynamic support for the file, object, and block layouts. pNFS is part of the first NFSv4 minor version (NFSv4.1).

    Fedora pNFS Client Setup - http://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_Setup

    pNFS Block Server Setup - http://wiki.linux-nfs.org/wiki/index.php/PNFS_block_server_setup


    RTL-SDR

    "RTL-SDR is a very cheap software defined radio that uses a DVB-T TV tuner dongle based on the RTL2832U chipset. With the combined efforts of Antti Palosaari, Eric Fry and Osmocom it was found that the signal I/Q data could be accessed directly, which allowed the DVB-T TV tuner to be converted into a wideband software defined radio via a new software driver.

    Essentially, this means that a cheap $20 TV tuner USB dongle with the RTL2832U chip can be used as a computer based radio scanner. This sort of scanner capability would have cost hundreds or even thousands of dollars just a few years ago. The RTL-SDR is also often referred to as RTL2832U, DVB-T SDR, RTL dongle or the “$20 Software Defined Radio”.

    There are many other software defined radios better than the RTL-SDR, but they all come at a higher price. Currently we think that the Airspy ($199) and SDRplay ($149) SDRs are the best low cost RX-only SDRs. Then there are the HackRF ($300USD) and BladeRF SDRs ($420 and $650), which can both transmit and receive.

    The RTL-SDR can be used as a wide band radio scanner. With an upconverter or direct sampling mod to receive HF signals, the applications expand to include:
    • Listening to amateur radio hams on SSB with LSB/USB modulation.
    • Decoding digital amateur radio ham communications such as CW/PSK/RTTY/SSTV.
    • Receiving HF weatherfax.
    • Receiving Digital Radio Mondiale (DRM) shortwave radio.
    • Listening to international shortwave radio.
    • Looking for RADAR signals like over the horizon (OTH) radar, and HAARP signals.
    Note that not all the applications listed may be legal in your country. Please be responsible."
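
    A sketch using the third-party pyrtlsdr bindings (not mentioned above) to pull raw I/Q samples from an RTL2832U dongle; frequency and sample rate are arbitrary choices:

        from rtlsdr import RtlSdr

        sdr = RtlSdr()
        sdr.sample_rate = 2.048e6        # Hz
        sdr.center_freq = 100e6          # tune into the broadcast FM band
        sdr.gain = "auto"

        samples = sdr.read_samples(256 * 1024)   # complex I/Q samples as a NumPy array
        sdr.close()
        print(len(samples), samples[:3])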

    http://www.rtl-sdr.com/

    https://en.wikipedia.org/wiki/Software-defined_radio

    http://osmocom.org/projects/sdr/wiki/rtl-sdr

    https://rtlsdr.org/

    https://www.facebook.com/rtlsdrblog/