Where the chairs are arranged with exquisite precision, and the rosin bag is always full. Or perhaps (yet) another attempt to keep track of those things of which we think we need to keep track.
Thursday, November 17, 2016
git docs
Git for Authors - https://mathbook.pugetsound.edu/gfa/html/git-for-authors.html
giteveryday - A useful minimum set of commands for Everyday Git - https://www.kernel.org/pub/software/scm/git/docs/giteveryday.html
Tuesday, November 15, 2016
FlyWeb
"In short, FlyWeb provides an API for web pages to host local web servers for exposing content and services to nearby browsers. It also adds the ability to discover and connect to nearby local web servers to the web browser itself. This feature allows users to find and connect to nearby devices with embedded web servers such as printers, thermostats and televisions as well as local web servers hosted in web pages via the FlyWeb API.
Enabling web pages to host local servers and providing the ability for the web browser to discover nearby servers opens up a whole new range of use cases for web apps. With FlyWeb, we can finally reach a level of richness in cross-device interactions previously only attainable via native apps. In addition, the built-in service discovery feature in the browser offers device makers and hobbyists a new way to leverage existing web technologies for users to interact with devices across all platforms."
https://flyweb.github.io/posts/2016/11/01/introducing-flyweb.html
https://flyweb.github.io/#home
PiBakery
"The key feature of PiBakery is the ability to create a customised version of Raspbian that you write directly to your Raspberry Pi. This works by creating a set of scripts that run when the Raspberry Pi has been powered on, meaning that your Pi can automatically perform setup tasks, and you don't need to configure anything.
The scripts are created using a block based interface that is very similar to Scratch. If you've used Scratch before, you already know how to use PiBakery. Simply drag and drop the different tasks that you want your Raspberry Pi to perform, and they'll be turned into scripts and written to your SD card. As soon as the Pi boots up, the scripts will be run."
http://www.pibakery.org/
https://hackaday.com/2016/11/04/bake-a-fresh-raspberry-pi-never-struggle-to-configure-a-pi-again/
Leibniz Digital Scientific Notation
"Leibniz is an attempt to define a digital scientific notation, i.e. a formal language for writing down scientific models in terms of equations and algorithms. Such models can be published, cited, and discussed, in addition to being manipulated by software.
Although Leibniz can express algorithms, it is not a programming language. It is more similar to a specification language in that it allows to express what some program is supposed to compute."
https://github.com/khinsen/leibniz
Scientific notations for the digital era - http://sjscience.org/article?id=527
https://www.guaana.com/projects/scientific-notations-for-the-digital-era
Verifiable research - https://thewinnower.com/papers/4770-verifiable-research-the-missing-link-between-replicability-and-reproducibility
https://zenodo.org/
http://nanopub.org/wordpress/
Readable Lisp S-expressions Project
"The goal of this “Readable Lisp s-expressions” project is to develop, implement, and gain widespread adoption of more readable format(s) for the S-expressions of Lisp-based languages (such as Common Lisp, Scheme, Emacs Lisp, and Arc). We’ve done this by creating new abbreviations that can be added to existing readers. Curly-infix-expressions add infix expressions (in a Lispy way): {a op b ...} maps to (op a b ...). Neoteric-expressions also add more traditional function call notation: f(...) maps to (f ...). Finally, sweet-expressions also add deducing parentheses from indentation. You can choose a subset (e.g., you can just add infix expressions without using indentation)."
http://readable.sourceforge.net/
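The curly-infix mapping {a op b ...} → (op a b ...) is mechanical enough to sketch in a few lines of Python. This is a toy illustration only; the project's real readers handle nesting, neoteric calls f(...), and indentation-based sweet-expressions.

```python
def curly_infix_to_prefix(tokens):
    """Map the curly-infix body {a op b op c ...} to the
    s-expression (op a b c ...), as in the readable project.

    `tokens` is the token sequence between the braces,
    e.g. ["a", "+", "b", "+", "c"].  A simple infix list
    requires every odd position to hold the same operator."""
    if len(tokens) == 1:          # {e} is just e
        return tokens[0]
    operands = tokens[0::2]
    operators = set(tokens[1::2])
    if len(operators) != 1:
        raise ValueError("mixed operators need explicit grouping")
    return "(" + " ".join([operators.pop()] + operands) + ")"

print(curly_infix_to_prefix(["a", "+", "b", "+", "c"]))  # (+ a b c)
```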
http://readable.sourceforge.net/
Friday, November 11, 2016
Global warming disaster could suffocate life on planet Earth, research shows
The response of the oceanic biota has been a relatively unknown factor until recently, when we discovered that the Great Barrier Reef is rapidly becoming the Doornail Barrier Reef. If you want a positive spin, hope for the oxygen producers in the ocean to evolve to handle the massive temperature change, or perhaps for the Einstein and Bohr of biology to pave the way for genetic engineering to be a savior.
"Falling oxygen levels caused by global warming could be a greater threat to the survival of life on planet Earth than flooding, according to researchers from the University of Leicester.
A study led by Sergei Petrovskii, Professor in Applied Mathematics from the University of Leicester’s Department of Mathematics, has shown that an increase in the water temperature of the world’s oceans of around six degrees Celsius – which some scientists predict could occur as soon as 2100 - could stop oxygen production by phytoplankton by disrupting the process of photosynthesis."
https://www2.le.ac.uk/offices/press/press-releases/2015/december/global-warming-disaster-could-suffocate-life-on-planet-earth-research-shows
Nonlinear climate sensitivity and its implications for future greenhouse warming
Nobody I know in the climate research community saw this coming five years - or even a couple of years - ago, or at least those who did buried it in denial. Even the supposed worst-case scenario in the IPCC reports is almost certainly a low-ball joke. The freight train is on the tracks and the throttle is stuck on full. What the well-paid global warming deniers have been calling a hysterical worst-case scenario for a couple of decades is becoming in fact the exact Panglossian opposite.
"According to the current best estimate, by the Intergovernmental Panel on Climate Change (IPCC), if humans carry on with a “business as usual” approach using large amounts of fossil fuels, the Earth’s average temperature will rise by between 2.6 and 4.8 degrees above pre-industrial levels by 2100.
However new research by an international team of experts who looked into how the Earth’s climate has reacted over nearly 800,000 years warns this could be a major under-estimate.
Because, they believe, the climate is more sensitive to greenhouse gases when it is warmer.
In a paper in the journal Science Advances, they said the actual range could be between 4.78C to 7.36C by 2100, based on one set of calculations."
http://www.independent.co.uk/news/science/climate-change-game-over-global-warming-climate-sensitivity-seven-degrees-a7407881.html
http://advances.sciencemag.org/content/2/11/e1501923
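The linear assumption the paper challenges treats the sensitivity relating temperature response to radiative forcing as a constant; the Science Advances result is, roughly, that the sensitivity grows with the background temperature. In generic notation (not the paper's own):

```latex
\Delta T = S(T)\,\Delta F, \qquad \frac{\mathrm{d}S}{\mathrm{d}T} > 0
```

With a state-dependent S, a sensitivity estimated in a cooler climate understates the response once the climate has already warmed.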
Wednesday, October 26, 2016
Bedrock
"Bedrock is a simple, modular, WAN-replicated data foundation for global-scale applications.
Bedrock was built by Expensify, and is a networking and distributed transaction layer built atop SQLite, the fastest, most reliable, and most widely distributed database in the world."
http://bedrockdb.com/
https://github.com/Expensify/Bedrock
mbed
"ARM mbed OS is an open source embedded operating system designed specifically for the "things" in the Internet of Things. It includes all the features you need to develop a connected product based on an ARM Cortex-M microcontroller, including security, connectivity, an RTOS, and drivers for sensors and I/O devices."
Monday, October 17, 2016
D4M.jl
"A Dynamic Distributed Dimensional Data Model (D4M) module for Julia.
D4M is a breakthrough in computer programming that combines the advantages of five distinct processing technologies (sparse linear algebra, associative arrays, fuzzy algebra, distributed arrays, and triple-store/NoSQL databases such as Hadoop HBase and Apache Accumulo) to provide a database and computation system that addresses the problems associated with Big Data. D4M significantly improves search, retrieval, and analysis for any business or service that relies on accessing and exploiting massive amounts of digital data."
https://github.com/achen12/D4M.jl
http://www.mit.edu/%7Ekepner/D4M/
Julia Implementation of the Dynamic Distributed Dimensional Data Model - https://arxiv.org/abs/1608.04041
Introducing D3 Science: Understanding Applications and Infrastructure - https://arxiv.org/abs/1609.03647
D4M 3.0 - https://arxiv.org/abs/1702.03253
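The associative-array idea at D4M's core can be sketched in plain Python: a sparse table keyed by string pairs, where algebraic operations act on the union of keys. A toy illustration only; D4M.jl's actual types and operations are far richer.

```python
from collections import defaultdict

# A toy associative array in the spirit of D4M: a sparse mapping
# from (row-key, column-key) pairs to values, where keys are
# arbitrary strings rather than integer indices.
def assoc_add(a, b):
    """Element-wise sum of two associative arrays (union of keys)."""
    out = defaultdict(float)
    for table in (a, b):
        for key, value in table.items():
            out[key] += value
    return dict(out)

# Hypothetical edge tables of a small graph.
edges = {("alice", "bob"): 1.0, ("alice", "carol"): 2.0}
more  = {("alice", "bob"): 1.0, ("dave", "bob"): 3.0}

graph = assoc_add(edges, more)
print(graph[("alice", "bob")])  # 2.0
```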
Futhark
"Futhark is a small programming language designed to be compiled to efficient GPU code. It is a statically typed, data-parallel, and purely functional array language, and comes with a heavily optimising ahead-of-time compiler that generates GPU code via OpenCL.
Futhark is not designed for graphics programming, but instead uses the compute power of the GPU to accelerate data-parallel array computations. We support regular nested data-parallelism, as well as a form of imperative-style in-place modification of arrays, while still preserving the purity of the language via the use of a uniqueness type system.
Futhark is not intended to replace your existing languages. Our intended use case is that Futhark is only used for relatively small but compute-intensive parts of an application. The Futhark compiler generates code that can be easily integrated with non-Futhark code. For example, you can compile a Futhark program to a Python module that internally uses PyOpenCL to execute code on the GPU, yet looks like any other Python module from the outside (more on this here). The Futhark compiler will also generate more conventional C code, which can be accessed from any language with a basic FFI."
http://futhark-lang.org/
https://github.com/diku-dk/futhark/
https://github.com/HIPERFIT/futhark
APL on GPUs: A TAIL from the Past, Scribbled in Futhark - http://hgpu.org/?p=16592
https://github.com/HIPERFIT/futhark-fhpc16
https://github.com/melsman/apltail/
Purely Functional GPU Programming with Futhark - https://fosdem.org/2017/schedule/event/functional_gpu_futhark/
Design and Implementation of the Futhark Programming Language - https://hgpu.org/?p=17903
Friday, October 14, 2016
GPU & DB Literature
GPU-accelerated database systems: Survey and open challenges
http://hgpu.org/?p=12738
Overtaking CPU DBMSes with a GPU in Whole-Query Analytic Processing with Parallelism-Friendly Execution Plan Optimization
http://hgpu.org/?p=16615
Parallel Inception
https://archive.fosdem.org/2016/schedule/event/hpc_bigdata_mpp/
https://github.com/kdunn926/plpygpgpu/blob/master/notebook.ipynb
CUDAnative
"This package provides support for compiling and executing native Julia kernels on CUDA hardware. It is a work in progress, highly experimental, and for now requires a version of Julia capable of generating PTX code (i.e. the fork at JuliaGPU/julia)."
https://github.com/JuliaGPU/CUDAnative.jl
Wednesday, October 12, 2016
pandasql
"pandasql allows you to query pandas DataFrames using SQL syntax. It works similarly to sqldf in R. pandasql seeks to provide a more familiar way of manipulating and cleaning data for people new to Python or pandas."
https://github.com/yhat/pandasql
http://blog.yhat.com/posts/pandasql-intro.html
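Under the hood pandasql routes the data through SQLite, which is easy to approximate with the standard library alone. The table name and rows below are made up for illustration:

```python
import sqlite3

# pandasql-style querying without pandas: load rows into an
# in-memory SQLite database and run ordinary SQL against them.
rows = [("alice", 34), ("bob", 29), ("carol", 41)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", rows)

result = conn.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY name"
).fetchall()
print(result)  # [('alice',), ('carol',)]
```

With pandasql itself the equivalent is a single call on a DataFrame, e.g. `sqldf("SELECT name FROM people WHERE age > 30", locals())`.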
Ultibo
"Ultibo core is an embedded or bare metal development environment for Raspberry Pi. It is not an operating system but provides many of the same services as an OS, things like memory management, networking, filesystems and threading plus much more. So you don’t have to start from scratch just to create your ideas."
https://ultibo.org/
Vagrant
"Vagrant is a tool for building and distributing development environments.
Development environments managed by Vagrant can run on local virtualized platforms such as VirtualBox or VMware, in the cloud via AWS or OpenStack, or in containers such as with Docker or raw LXC.
Vagrant provides the framework and configuration format to create and manage complete portable development environments. These development environments can live on your computer or in the cloud, and are portable between Windows, Mac OS X, and Linux.
Vagrant is an open-source software product for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that the environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid the unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.
Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments. Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem. Providers are the services that Vagrant uses to set up and create virtual environments. Support for VirtualBox, Hyper-V, and Docker virtualization ships with Vagrant, while VMware and AWS are supported via plugins.
Vagrant sits on top of virtualization software as a wrapper and helps the developer interact easily with the providers. It automates the configuration of virtual environments using Chef or Puppet, and the user does not have to directly use any other virtualization software. Machine and software requirements are written in a file called "Vagrantfile" to execute necessary steps in order to create a development-ready box. A box is a format and an extension (.box) for Vagrant environments that is copied to another machine in order to replicate the same environment."
https://github.com/mitchellh/vagrant
https://www.vagrantup.com/
Vagrant Tutorial - https://manski.net/2016/09/vagrant-multi-machine-tutorial/
How to Create a CentOS Vagrant Base Box - https://github.com/ckan/ckan/wiki/How-to-Create-a-CentOS-Vagrant-Base-Box
Using Ansible to Provision Vagrant Boxes - https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/
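The "Vagrantfile" mentioned above is a short Ruby DSL file. A minimal sketch looks like the following; the box name, port numbers, and provisioning line are illustrative, not taken from the quoted docs:

```ruby
# Minimal Vagrantfile sketch: pick a base box, forward a port,
# and run a shell provisioner on first boot.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: "apt-get update"
end
```

`vagrant up` then builds and boots the box, and `vagrant ssh` drops into it.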
Virtualize OSX on Linux
"I've been a Linux user for something like 10 years now. In order to develop and maintain psutil on different platforms I've been using the excellent VirtualBox. With it, during the years, I've been able to virtualize different versions of Windows, FreeBSD, OpenBSD, NetBSD and Solaris and implement and protract support for such platforms inside psutil. Without VirtualBox there really wouldn't exist psutil as it stands nowadays.
At some point I also managed to virtualize OSX by using an hacked version of OSX called iDeneb which is based on OSX 10.5 / Leopard (note: 9 years old), and that is what I've been using up until today. Of course such an old hacked version of OSX isn't nice to deal with. It ships Python 2.5, it kernel panicks, I had to reinstall it from scratch quite often.
I'm really not sure how I could have been missing this for all this time, but it turns out emulating OSX on Linux really is as easy as executing a one-liner:
vagrant init AndrewDryga/vagrant-box-osx; vagrant up
And that really is it! I mean... you're literally good to go and start developing! That will create a Vagrant file, download a pre-configured OSX image via internet (10GB or something) and finally run it in VirtualBox. The whole package includes:
OSX 10.10.4 / Yosemite
XCode 6.4 + gcc
brew
Python 2.7
In a couple of hours I modified the original Vagrantfile a little and managed to mount a directory which is shared between the VM and the host (my laptop) and ended up with this Vagrantfile."
http://grodola.blogspot.com/2016/10/virtualize-osx-on-linux_53.html
https://atlas.hashicorp.com/AndrewDryga/boxes/vagrant-box-osx/
Wednesday, October 5, 2016
pyMIC
"Python module to offload computation in a Python program to the Intel Xeon Phi coprocessor. It contains offloadable arrays and device management functions. It supports invocation of native kernels (C/C++, Fortran) and blends in with Numpy's array types for float, complex, and int data types."
https://github.com/01org/pyMIC
https://software.intel.com/en-us/articles/pymic-a-python-offload-module-for-the-intelr-xeon-phitm-coprocessor
https://www.euroscipy.org/2015/schedule/presentation/9/
https://arxiv.org/abs/1607.00844
Mininet
"Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command."
http://mininet.org/
Tuesday, October 4, 2016
conventions
NACDD - https://geo-ide.noaa.gov/wiki/index.php?title=NetCDF_Attribute_Convention_for_Dataset_Discovery
NCEI NetCDF Templates - https://www.nodc.noaa.gov/data/formats/netcdf/v2.0/
NetCDF CF - http://cfconventions.org/
UGRID - https://github.com/ugrid-conventions/ugrid-conventions
OpenHPC
"OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others.
OpenHPC provides pre-built binaries via repositories for use with standard Linux package manager tools (e.g. yum or zypper). Package repositories are housed at https://build.openhpc.community. To get started, you can enable an OpenHPC repository locally through installation of an ohpc-release RPM which includes gpg keys for package signing and defines the URL locations for [base] and [update] package repositories."
http://openhpc.community/
https://github.com/pmodels/ohpc
https://build.openhpc.community/
https://www.nextplatform.com/2016/11/28/openhpc-pedal-put-compute-metal/
https://archive.fosdem.org/2016/schedule/event/hpc_bigdata_openhpc/
BOLT
"BOLT targets a high-performing OpenMP implementation, especially specialized for fine-grain parallelism. Unlike other OpenMP implementations, BOLT utilizes a lightweight threading model for its underlying threading mechanism. It currently adopts Argobots, a new holistic, low-level threading and tasking runtime, in order to overcome shortcomings of conventional OS-level threads. Its runtime and compiler are based on the OpenMP runtime and Clang in LLVM, respectively."
http://www.mcs.anl.gov/bolt/
https://github.com/pmodels/bolt-runtime
https://github.com/pmodels/argobots
https://wiki.mpich.org/mpich/index.php/MPI%2BArgobots
Sunday, October 2, 2016
pgAdmin
"pgAdmin 4 is a complete rewrite of pgAdmin, built using Python and Javascript/jQuery. A desktop runtime written in C++ with Qt allows it to run standalone for individual users, or the web application code may be deployed directly on a webserver for use by one or more users through their web browser. The software has the look and feel of a desktop application whatever the runtime environment is, and vastly improves on pgAdmin III with updated user interface elements, multi-user/web deployment options, dashboards and a more modern design."
https://www.pgadmin.org/
gitless
"Gitless is an experimental version control system built on top of Git. Many people complain that Git is hard to use. We think the problem lies deeper than the user interface, in the concepts underlying Git. Gitless is an experiment to see what happens if you put a simple veneer on an app that changes the underlying concepts. Because Gitless is implemented on top of Git (could be considered what Git pros call a "porcelain" of Git), you can always fall back on Git. And of course your coworkers you share a repo with need never know that you're not a Git aficionado."
http://gitless.com/
Friday, September 30, 2016
asyncpg
"A database interface library designed specifically for PostgreSQL and Python/asyncio. asyncpg is an efficient, clean implementation of PostgreSQL server binary protocol for use with Python's asyncio framework."
https://github.com/MagicStack/asyncpg
https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/
https://magicstack.github.io/asyncpg/current/
Thursday, September 29, 2016
Devito
"Devito is a new tool for performing optimised Finite Difference (FD) computation from high-level symbolic problem definitions. Devito performs automated code generation and Just-In-time (JIT) compilation based on symbolic equations defined in SymPy to create and execute highly optimised Finite Difference kernels on multiple computer platforms."
https://github.com/opesci/devito
Devito: automated fast finite difference computation - http://hgpu.org/?p=16561
OgmaNeo
"Efforts at understanding the computational processes in the brain have met with limited success, despite their importance and potential uses in building intelligent machines. We propose a simple new model which draws on recent findings in Neuroscience and the Applied Mathematics of interacting Dynamical Systems. The Feynman Machine is a Universal Computer for Dynamical Systems, analogous to the Turing Machine for symbolic computing, but with several important differences. We demonstrate that networks and hierarchies of simple interacting Dynamical Systems, each adaptively learning to forecast its evolution, are capable of automatically building sensorimotor models of the external and internal world. We identify such networks in mammalian neocortex, and show how existing theories of cortical computation combine with our model to explain the power and flexibility of mammalian intelligence. These findings lead directly to new architectures for machine intelligence. A suite of software implementations has been built based on these principles, and applied to a number of spatiotemporal learning tasks."
https://arxiv.org/abs/1609.03971v1
https://github.com/ogmacorp
Wednesday, September 28, 2016
dask.distributed
"Dask.distributed is a lightweight library for distributed computing in Python. It extends both the concurrent.futures and dask APIs to moderate sized clusters. Distributed serves to complement the existing PyData analysis stack. In particular it meets the following needs:
- Low latency: Each task suffers about 1ms of overhead. A small computation and network roundtrip can complete in less than 10ms.
- Peer-to-peer data sharing: Workers communicate with each other to share data. This removes central bottlenecks for data transfer.
- Complex Scheduling: Supports complex workflows (not just map/filter/reduce) which are necessary for sophisticated algorithms used in nd-arrays, machine learning, image processing, and statistics.
- Pure Python: Built in Python using well-known technologies. This eases installation, improves efficiency (for Python users), and simplifies debugging.
- Data Locality: Scheduling algorithms cleverly execute computations where data lives. This minimizes network traffic and improves efficiency.
- Familiar APIs: Compatible with the concurrent.futures API in the Python standard library. Compatible with the dask API for parallel algorithms.
- Easy Setup: As a pure Python package, distributed is pip installable and easy to set up on your own cluster.
The dask-scheduler process coordinates the actions of several dask-worker processes spread across multiple machines and the concurrent requests of several clients.
The scheduler is asynchronous and event driven, simultaneously responding to requests for computation from multiple clients and tracking the progress of multiple workers. The event-driven and asynchronous nature makes it flexible to concurrently handle a variety of workloads coming from multiple users at the same time while also handling a fluid worker population with failures and additions. Workers communicate amongst each other for bulk data transfer over TCP.
Internally the scheduler tracks all work as a constantly changing directed acyclic graph of tasks. A task is a Python function operating on Python objects, which can be the results of other tasks. This graph of tasks grows as users submit more computations, fills out as workers complete tasks, and shrinks as users leave or become disinterested in previous results.
Users interact by connecting a local Python session to the scheduler and submitting work, either by individual calls to the simple interface client.submit(function, *args, **kwargs) or by using the large data collections and parallel algorithms of the parent dask library. The collections in the dask library like dask.array and dask.dataframe provide easy access to sophisticated algorithms and familiar APIs like NumPy and Pandas, while the simple client.submit interface provides users with custom control when they want to break out of canned “big data” abstractions and submit fully custom workloads."
http://distributed.readthedocs.io/en/latest/
https://matthewrocklin.com/blog//work/2016/09/22/cluster-deployments
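Since distributed's Client is designed to be compatible with the concurrent.futures API, the submit/result pattern can be sketched with the standard library alone — a ThreadPoolExecutor stands in here for a Client connected to a dask-scheduler, so no running cluster is assumed:

```python
# The same submit/result pattern works with distributed's Client,
# e.g. client = Client("scheduler-address:8786"); client.submit(square, 10).
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(square, 10)  # like client.submit(square, 10)
    assert future.result() == 100     # block until the task completes
```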
teleport
"Gravitational Teleport is a modern SSH server for remotely accessing clusters of Linux servers via SSH or HTTPS. It is intended to be used instead of sshd. Teleport enables teams to easily adopt the best SSH practices like:
- No need to distribute keys: Teleport uses certificate-based access with automatic expiration time.
- Enforcement of 2nd factor authentication.
- Cluster introspection: every Teleport node becomes a part of a cluster and is visible on the Web UI.
- Record and replay SSH sessions for knowledge sharing and auditing purposes.
- Collaboratively troubleshoot issues through session sharing.
- Connect to clusters located behind firewalls without direct Internet access via SSH bastions.
- Ability to integrate SSH credentials with your organization identities via OAuth (Google Apps, Github).
https://github.com/gravitational/teleport
https://github.com/gravitational/teleconsole
Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen
"This post presents performance benchmarks for general matrix-matrix multiplication between Mir GLAS, OpenBLAS, Eigen, and two closed source BLAS implementations from Intel and Apple.
OpenBLAS is the default BLAS implementation for most numeric and scientific projects, for example the Julia Programming Language and NumPy. The OpenBLAS Haswell computation kernels were written in assembler.
Mir is an LLVM-Accelerated Generic Numerical Library for Science and Machine Learning. It requires LDC (LLVM D Compiler) for compilation. Mir GLAS (Generic Linear Algebra Subprograms) has a single generic kernel for all CPU targets, all floating point types, and all complex types. It is written completely in D, without any assembler blocks. In addition, Mir GLAS Level 3 kernels are not unrolled and produce tiny binary code, so they put less pressure on the instruction cache in large applications."
http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/glas-gemm-benchmark.html
https://github.com/libmir/mir
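As a reference point for what these GEMM benchmarks measure, here is the textbook triple-loop matrix multiplication in Python — the naive baseline that tuned kernels such as Mir GLAS and OpenBLAS beat through cache blocking, register tiling and SIMD (an illustration only, unrelated to Mir's D sources):

```python
import numpy as np

# Naive triple-loop GEMM (C = A @ B): correct but slow; optimised BLAS
# kernels compute the same result orders of magnitude faster.
def gemm_naive(A, B):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = gemm_naive(A, B)
```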
Tuesday, September 20, 2016
ssd2gpu
"A kernel module to support SSD-to-GPU direct DMA.
NVMe-Strom is a Linux kernel module which provides SSD-to-GPU direct DMA. It allows one to (1) map a particular GPU device memory region onto the PCI BAR memory area, and (2) launch P2P DMA from the source file blocks to the mapped GPU device memory without intermediation by main memory.
Requirements
- NVIDIA Tesla or Quadro GPU
- NVMe SSD
- Red Hat Enterprise Linux 7.x, or compatible kernel
- Ext4 or XFS filesystem on the raw block device (Any RAID should not be constructed on the device)"
http://kaigai.hatenablog.com/entry/2016/09/08/003556
Thursday, September 15, 2016
PSyclone
"PSyclone is a code generation and optimisation environment for the GungHo PSy layer.
GungHo proposes to separate model code into 3 layers: the algorithm layer, the PSy layer and the kernel layer. This approach is called PSyKAl (for PSy, Kernel, ALgorithm). The idea behind PSyKAl is to separate science code, which should be invariant across computational resources, from code optimisations, which are often machine specific. The hope is that this separation will lead to understandable and maintainable code whilst providing performance portability, as code can be optimised for the required architecture(s).
The Algorithm layer implements a code's algorithm at a relatively high level in terms of logically global fields, control structures and calls to kernel routines.
The Kernel layer implements the underlying science. Kernels operate on a subset of a field, typically a column or set of columns. The Kernel operates on raw (Fortran) arrays; this is primarily for performance reasons.
The PSy layer sits in between the Algorithm layer and the Kernel layer. Its functional responsibilities are to:
- map from the global view of the algorithm layer to the field-subset view of the kernel layer by iterating over the appropriate space (typically mesh cells in a finite element implementation).
- map between the high level global field view of data at the algorithm layer and the low level local (fortran) array view of data at the kernel layer.
- provide any additional required arguments to the Kernel layer, such as dofmaps and quadrature values.
- add appropriate halo calls, reduction variables and global sums to ensure correct operation of the parallel code.
The PSy layer is also where any single-node performance optimisations take place, such as OpenMP parallelisation for many-core architectures or OpenACC parallelisation for GPUs. Note that the internode, distributed-memory partitioning (typically MPI with domain decomposition) is taken care of separately.
PSyclone is a tool that generates PSy layer code. This is achieved by parsing the algorithm layer code to determine the order in which kernels are called and parsing metadata about the kernels themselves. In addition to generating correct PSy code, PSyclone offers a set of optimising transformations which can be used to optimise the performance of the PSy layer."
https://puma.nerc.ac.uk/trac/GungHo/wiki/PSyclone
https://puma.nerc.ac.uk/trac/GungHo
https://github.com/stfc/PSyclone
Unique code-generating software makes weather and climate forecasting easier -
https://www.scientific-computing.com/news/unique-code-generating-software-makes-weather-and-climate-forecasting-easier
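The PSyKAl separation can be sketched with a toy Python example (hypothetical names, and Python rather than the Fortran PSyclone actually generates): the kernel sees a single column, the PSy layer maps the kernel over all columns, and the algorithm layer works only with whole fields:

```python
import numpy as np

def kernel_scale_column(column, factor):
    """Kernel layer: science code on a single column (raw array)."""
    return column * factor

def psy_invoke_scale(field, factor):
    """PSy layer: map the global field view onto column-wise kernel calls.
    This is the layer a generator would parallelise (e.g. OpenMP over j)."""
    out = np.empty_like(field)
    for j in range(field.shape[1]):            # iterate over columns
        out[:, j] = kernel_scale_column(field[:, j], factor)
    return out

# Algorithm layer: a high-level call in terms of logically global fields.
field = np.ones((4, 3))
result = psy_invoke_scale(field, 2.0)
```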
PiBakery
"The key feature of PiBakery is the ability to create a customised version of Raspbian that you write directly to your Raspberry Pi. This works by creating a set of scripts that run when the Raspberry Pi has been powered on, meaning that your Pi can automatically perform setup tasks, and you don't need to configure anything.
The scripts are created using a block based interface that is very similar to Scratch. If you've used Scratch before, you already know how to use PiBakery. Simply drag and drop the different tasks that you want your Raspberry Pi to perform, and they'll be turned into scripts and written to your SD card. As soon as the Pi boots up, the scripts will be run."
http://www.pibakery.org/
F-Droid
"F-Droid is an installable catalogue of FOSS (Free and Open Source Software) applications for the Android platform. The client makes it easy to browse, install, and keep track of updates on your device."
https://f-droid.org/
monetdb
"When your database grows into millions of records spread over many tables and business intelligence/science becomes the prevalent application domain, a column-store database management system is called for. Unlike traditional row-stores, such as MySQL and PostgreSQL, a column-store provides a modern and scalable solution without calling for substantial hardware investments.
MonetDB has pioneered column-store solutions for high-performance data warehouses for business intelligence and eScience since 1993. It achieves its goal by innovations at all layers of a DBMS, e.g. a storage model based on vertical fragmentation, a modern CPU-tuned query execution architecture, automatic and adaptive indices, run-time query optimization, and a modular software architecture. It is based on the SQL 2003 standard with full support for foreign keys, joins, views, triggers, and stored procedures. It is fully ACID compliant and supports a rich spectrum of programming interfaces (JDBC, ODBC, PHP, Python, RoR, C/C++, Perl).
MonetDB is the focus of database research pushing the technology envelope in many areas. Its three-level software stack, comprising an SQL front-end, tactical optimizers, and a columnar abstract-machine kernel, provides a flexible environment to customize it in many different ways. A rich collection of linked-in libraries provides functionality for temporal data types, math routines, strings, and URLs. In-depth information on the technical innovations in the design and implementation of MonetDB can be found in our science library."
https://www.monetdb.org/Home
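The "vertical fragmentation" storage model can be illustrated with a toy Python sketch (purely illustrative; nothing to do with MonetDB's internals): keeping one array per attribute means an aggregate scans only the columns it touches:

```python
# Row-store layout: one record per row, all attributes together.
rows = [{"id": 1, "price": 10.0, "qty": 2},
        {"id": 2, "price": 7.5, "qty": 4}]

# Column-store layout ("vertical fragmentation"): one contiguous array
# per attribute, so this aggregate never reads the "id" column at all.
columns = {"id": [1, 2], "price": [10.0, 7.5], "qty": [2, 4]}

revenue_rows = sum(r["price"] * r["qty"] for r in rows)
revenue_cols = sum(p * q for p, q in zip(columns["price"], columns["qty"]))
assert revenue_rows == revenue_cols == 50.0
```

On real hardware the columnar layout also means sequential scans and better cache behaviour, which is where the analytics speedups come from.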
htsql
"HTSQL is designed for data analysts and other accidental programmers who have complex business inquiries to solve and need a productive tool to write and share database queries.
HTSQL is a complete query language featuring automated linking, aggregation, projections, filters, macros, a compositional syntax, and a full set of data types & functions.
HTSQL is a web service that accepts queries as URLs, returning results formatted as HTML, JSON, CSV or XML. With HTSQL, databases can be accessed, secured, cached, and integrated using standard web technologies.
HTSQL requests are translated to efficient SQL queries. HTSQL supports different SQL dialects including SQLite, PostgreSQL, MySQL, Oracle, and Microsoft SQL Server."
http://htsql.org/
BlazingDB
"BlazingDB is an extremely fast SQL database able to handle petabyte scale. BlazingDB heavily uses specialized, massively parallel co-processors, specifically graphics processors (GPUs). Blazing is a data science platform that enables our users to run very large processes and jobs through Python, R, and SQL on super-charged GPU servers."
http://blazingdb.com/
Tuesday, September 13, 2016
FPGA Bibliography
Survey of Domain-Specific Languages for FPGA Computing - http://hgpu.org/?p=16212
"High-performance FPGA programming has typically been the exclusive domain of a small band of specialized hardware developers. They are capable of reasoning about implementation concerns at the register-transfer level (RTL), which is analogous to assembly-level programming in software. Sometimes these developers are required to push further down to manage even lower levels of abstraction closer to physical aspects of the design, such as detailed layout, to meet critical design constraints. In contrast, software programmers have long since moved away from textual assembly-level programming towards relying on graphical integrated development environments (IDEs), high-level compilers, smart static analysis tools and runtime systems that optimize, manage and assist the program development tasks. Domain-specific languages (DSLs) can bridge this productivity gap by providing higher levels of abstraction in environments close to the domain of the application expert. DSLs carefully limit the set of programming constructs to minimize programmer mistakes while also enabling a rich set of domain-specific optimizations and program transformations. With a large number of DSLs to choose from, an inexperienced FPGA user may be confused about how to select an appropriate one for the intended domain. In this paper, we review a combination of legacy and state-of-the-art DSLs available for FPGA development and provide a taxonomy and classification to guide selection and correct use of the framework."
A Survey of FPGA Based Neural Network Accelerator - https://hgpu.org/?p=17900
"Recent research on neural networks has shown great advantages in computer vision over traditional algorithms based on handcrafted features and models. Neural networks are now widely adopted in areas like image, speech and video recognition. But the great computation and storage complexity of neural network based algorithms poses great difficulty for their application. CPU platforms can hardly offer enough computation capacity. GPU platforms are the first choice for neural network processing because of their high computation capacity and easy-to-use development frameworks. On the other hand, FPGA based neural network accelerators are becoming a research topic, because specifically designed hardware is the next possible solution to surpass GPUs in speed and energy efficiency. Various FPGA based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency. In this paper, we give an overview of previous work on neural network accelerators based on FPGA and summarize the main techniques used. Investigation from software to hardware, from circuit level to system level, is carried out to complete the analysis of FPGA based neural network accelerator design and serve as a guide to future work."
Bibliography of GPU Overviews
A Survey on Parallel Computing and its Applications in Data-Parallel Problems Using GPU Architectures - https://www.cambridge.org/core/journals/communications-in-computational-physics/article/survey-on-parallel-computing-and-its-applications-in-dataparallel-problems-using-gpu-architectures/879D964A36478175DEED99FB00C8D811
"Parallel computing has become an important subject in the field of computer science and has proven to be critical when researching high performance solutions. The evolution of computer architectures (multi-core and many-core) towards a higher number of cores can only confirm that parallelism is the method of choice for speeding up an algorithm. In the last decade, the graphics processing unit, or GPU, has gained an important place in the field of high performance computing (HPC) because of its low cost and massive parallel processing power. Super-computing has become, for the first time, available to anyone at the price of a desktop computer. In this paper, we survey the concept of parallel computing and especially GPU computing. Achieving efficient parallel algorithms for the GPU is not a trivial task, there are several technical restrictions that must be satisfied in order to achieve the expected performance. Some of these limitations are consequences of the underlying architecture of the GPU and the theoretical models behind it. Our goal is to present a set of theoretical and technical concepts that are often required to understand the GPU and its massive parallelism model. In particular, we show how this new technology can help the field of computational physics, especially when the problem is data-parallel. We present four examples of computational physics problems; n-body, collision detection, Potts model and cellular automata simulations. These examples well represent the kind of problems that are suitable for GPU computing. By understanding the GPU architecture and its massive parallelism programming model, one can overcome many of the technical limitations found along the way, design better GPU-based algorithms for computational physics problems and achieve speedups that can reach up to two orders of magnitude when compared to sequential implementations."
Scientific Computing Using Consumer Video-Gaming Hardware Devices - http://hgpu.org/?p=16277
"Commodity video-gaming hardware (consoles, graphics cards, tablets, etc.) performance has been advancing at a rapid pace owing to strong consumer demand and stiff market competition. Gaming hardware devices are currently amongst the most powerful and cost-effective computational technologies available in quantity. In this article, we evaluate a sample of current generation video-gaming hardware devices for scientific computing and compare their performance with specialized supercomputing general purpose graphics processing units (GPGPUs). We use the OpenCL SHOC benchmark suite, which is a measure of the performance of compute hardware on various different scientific application kernels, and also a popular public distributed computing application, Einstein@Home in the field of gravitational physics for the purposes of this evaluation."
GPU-accelerated algorithms for many-particle continuous-time quantum walks - http://www.sciencedirect.com/science/article/pii/S0010465517300668
"On the other hand, the evolution of computer architectures towards multicore processors even in stand-alone workstations enabled important cuts of the execution time by introducing the possibility of running multiple threads in parallel and spreading the workload among cores. This possibility was boosted up by the general purpose parallel computing architectures of modern graphic cards (GPGPUs). In the latter, hundreds or thousands of computational cores in the same single chip are able to process simultaneously a very large number of data. It should also be noted that an impressive computational power is present not only in dedicated GPUs for high-performance computing, but also in commodity graphic cards, which make modern workstations suitable for numerical analyses. In order to exploit such a huge computational power, algorithms must be first redesigned and adapted to the SIMT (Single Instruction Multiple Thread) and SIMD (Single Instruction Multiple Data) paradigms and translated then into programming languages with hardware-specific subsets of instructions. Among them, one of the most diffuse is CUDA-C, a C extension for the Compute Unified Device Architecture (CUDA) that represents the core component of NVIDIA GPUs. As a matter of fact, the use of GPUs for scientific analysis, which dates back to mid and late 2000s [31]; [32]; [33]; [34] ; [35], dramatically boosted with a two-digit yearly increasing rate since 2010. Just looking at the computational physics realm, several GPU-specific algorithms have been proposed in the last three years, e.g., for stochastic differential equations [36], molecular dynamics simulations [37] ; [38], fluid dynamics [39] ; [40], Metropolis Monte Carlo [41] simulations, quantum Monte Carlo simulations [42], and free-energy calculations [43]."
THOR
"We have designed and developed, from scratch, a global circulation model named THOR that solves the three-dimensional non-hydrostatic Euler equations. Our general approach lifts the commonly used assumptions of a shallow atmosphere and hydrostatic equilibrium. We solve the "pole problem" (where converging meridians on a sphere lead to increasingly smaller time steps near the poles) by implementing an icosahedral grid. Irregularities in the grid, which lead to grid imprinting, are smoothed using the "spring dynamics" technique. We validate our implementation of spring dynamics by examining calculations of the divergence and gradient of test functions. To prevent the computational time step from being bottlenecked by having to resolve sound waves, we implement a split-explicit method together with a horizontally explicit and vertically implicit integration. We validate our global circulation model by reproducing the Earth and also the hot Jupiter-like benchmark tests. THOR was designed to run on Graphics Processing Units (GPUs), which allows for physics modules (radiative transfer, clouds, chemistry) to be added in the future, and is part of the open-source Exoclimes Simulation Platform."
THOR: A New and Flexible Global Circulation Model to Explore Planetary Atmospheres - http://hgpu.org/?p=16280
Exoclimes Simulation Platform - http://www.exoclime.net/
The Exoclimes Simulation Platform (ESP) was born from a necessity to move beyond Earth-centric approaches to understanding atmospheres. Our dream and vision is to provide the exoplanet community with an open-source, freely-available, ultra-fast and cutting-edge set of simulational tools for studying exoplanetary atmospheres. The ESP harnesses the power of GPUs (graphic processing units), found in most Macs nowadays, to produce speed-ups at the order-of-magnitude level. These speed-ups are invested in building intuition and studying how atmospheric dynamics, chemistry and radiation interact in various ways.
HELIOS - GPU-Accelerated Radiative Transfer Code For Exoplanetary Atmospheres - https://github.com/exoclime/HELIOS
VULCAN - Atmospheric Chemistry - https://github.com/exoclime/VULCAN
Monday, September 12, 2016
Seaboard
"Seaboards are single ‘dashboard’ visualizations of the real time and forecast ocean data currently provided by SOCIB, from different coastal and ocean monitoring locations around the Balearic Islands. A specific set of Seaboards has been designed for the tourist sector and these are now installed in several collaborating hotels, providing useful…
http://seaboard.socib.es"
https://github.com/socib/seaboard
http://apps.socib.es/
GI-cat
"GI-cat features caching and mediation capabilities and can act as a
broker towards disparate catalog and access services: by implementing
metadata harmonization and protocol adaptation, it is able to transform
query results to a uniform and consistent interface. GI-cat is based on a
service-oriented framework of modular components and can be customized
and tailored to support different deployment scenarios.
GI-cat can access a multiplicity of catalogs services, as well as inventory and access services to discover, and possibly access, heterogeneous ESS resources. Specific components implement mediation services for interfacing heterogeneous service providers which expose multiple standard specifications; they are called Accessors. These mediating components map the heterogeneous providers metadata models into a uniform data model which implements ISO 19115, based on official ISO 19139 schemas and its extensions Accessors also implement the query protocol mapping; they translate the query requests expressed according to the interface protocols exposed by GI-cat, into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services (e.g. WCS, WMS), THREDDS Data Server, SeaDataNet Common Data Index, and GBIF. A list of test endpoints is here available.
The supported sources are:
http://essi-lab.eu/do/view/GIcat
http://essi-lab.eu/do/view/GIcat/GIcatDocumentation
How to Configure GI-cat for the First Time - https://www.youtube.com/watch?v=28biJHTQSrM
http://bcube.geodab.eu/bcube-broker/
GI-go GeoBrowser - http://essi-lab.eu/do/view/GIgo/WebHome
http://www.earthcube.org/workspace/bcube/brokering-accessor-hack-thon
GI-cat can access a multiplicity of catalogs services, as well as inventory and access services to discover, and possibly access, heterogeneous ESS resources. Specific components implement mediation services for interfacing heterogeneous service providers which expose multiple standard specifications; they are called Accessors. These mediating components map the heterogeneous providers metadata models into a uniform data model which implements ISO 19115, based on official ISO 19139 schemas and its extensions Accessors also implement the query protocol mapping; they translate the query requests expressed according to the interface protocols exposed by GI-cat, into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services (e.g. WCS, WMS), THREDDS Data Server, SeaDataNet Common Data Index, and GBIF. A list of test endpoints is here available.
The supported sources are:
- OGC WCS 1.0, 1.1, 1.1.2 Specification Mapping details Configuration guide
- OGC WMS 1.3.0, 1.1.1 Specification Configuration guide
- OGC WFS 1.0.0 Specification Configuration guide
- OGC WPS 1.0.0 Specification
- OGC SOS 1.0.0 Specification
- OGC CSW 2.0.2 Core, AP ISO 1.0, ebRIM/CIM, ebRIM/EO, CWIC Specification Configuration guide
- FLICKR Specification
- HDF Specification
- HMA CSW 2.0.2 ebRIM/CIM Test instance
- GeoNetwork (versions 2.2.0 and 2.4.1) catalog service
- Deegree (version 2.2) catalog service
- ESRI ArcGIS Geoportal (version 10) catalog service Specification Configuration guide
- WAF Web Accessible Folders 1.0 Specification
- FTP - File Transfer Protocol services populated with supported metadata
- THREDDS 1.0.1, 1.0.2 Specification Mapping details Configuration guide
- THREDDS-NCISO 1.0.1, 1.0.2 Specification Mapping details
- THREDDS-NCISO-PLUS 1.0.1, 1.0.2 Specification Mapping details
- CDI 1.04, 1.3, 1.4 Mapping details 1.6 Mapping details
- GI-cat 6.x, 7.x Specification
- GBIF Specification Configuring GBIF
- OpenSearch 1.1 accessor Specification
- OAI-PMH 2.0 (support for ISO19139 and Dublin Core formats) Specification
- NetCDF-CF 1.4 Specification Mapping details Configuration guide
- NCML-CF Specification Mapping details Configuration guide
- NCML-OD Specification
- ISO19115-2
- GeoRSS 2.0 RSS Specification GeoRSS Specification Mapping details
- GDACS Homepage
- DIF Specification Mapping
- File system Configuration guide
- SITAD (Sistema Informativo Territoriale Ambientale Diffuso) accessor
- INPE Web interface Mapping details
- HYDRO Specification Mapping details Example use case Implementation details
- EGASKRO Specification
- RASAQM Specification
- IRIS event Specification
- IRIS station Specification
- UNAVCO Specification
- KISTERS Web - Environment of Canada Kisters Environment Canada
- DCAT DCAT specification Guide to the DCAT accessor
- CKAN CKAN homepage Guide to the CKAN accessor
- HYRAX THREDDS SERVER 1.9 HYRAX homepage Configuring HYRAX
- EML 2.1.1 EML metadata language specification
- ARPA DB (based on Microsoft SQL)
- BCODMO Homepage
- Environment Canada Hydrometric data (FTP) Reference endpoint
- SHAPE files (FTP) ESRI Shapefile Technical Description
- Earth Engine
- ESRI Map Server
- FedEO FedEO Clearinghouse Homepage
- GrADS-DS GrADS Data Server Homepage
- IADC DB (MySQL)
- NERRS National Estuarine Research Reserve System
- OGC WMTS Web Map Tile Service Documentation
http://essi-lab.eu/do/view/GIcat
http://essi-lab.eu/do/view/GIcat/GIcatDocumentation
How to Configure GI-cat for the First Time - https://www.youtube.com/watch?v=28biJHTQSrM
http://bcube.geodab.eu/bcube-broker/
GI-go GeoBrowser - http://essi-lab.eu/do/view/GIgo/WebHome
http://www.earthcube.org/workspace/bcube/brokering-accessor-hack-thon
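Because GI-cat exposes standard catalog interfaces, a client typically talks to it the same way it would to any OGC CSW endpoint. As a hedged sketch (the endpoint URL below is a placeholder, not a real GI-cat deployment), here is how a CSW 2.0.2 GetRecords request can be assembled in KVP form with the Python standard library:

```python
from urllib.parse import urlencode

def csw_getrecords_url(endpoint, keyword, max_records=10):
    """Build an OGC CSW 2.0.2 GetRecords request in KVP (GET) form."""
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",
        "resultType": "results",
        "elementSetName": "summary",
        "constraintLanguage": "CQL_TEXT",
        "constraint_language_version": "1.1.0",
        # Full-text constraint over the record's searchable fields
        "constraint": f"AnyText like '%{keyword}%'",
        "maxRecords": str(max_records),
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint, for illustration only.
url = csw_getrecords_url("http://example.org/gi-cat/services/cswiso", "temperature")
print(url)
```

Fetching that URL would return an XML GetRecords response, which is where a broker like GI-cat does its protocol and metadata-model translation behind the scenes.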
AODN Open Geospatial Portal
"The AODN open geospatial portal is a Grails application for discovering, subsetting, and downloading geospatial data. The application is a stateless front end to other servers: GeoNetwork metadata catalog, GeoServer data server (WMS and WFS), ncWMS web map server, and GoGoDuck netCDF subsetting and aggregation service."
https://github.com/aodn/aodn-portal
RAMADDA
"RAMADDA is a content repository and publishing platform with a focus on science data."
https://sourceforge.net/projects/ramadda/
RAMADDA on Docker - https://github.com/Unidata/ramadda-docker
Docker Unidata/RAMADDA - https://hub.docker.com/r/unidata/ramadda/
https://github.com/ScottWales/ramadda
https://github.com/Unidata/tomcat-docker
Siphon
"Siphon is a collection of Python utilities for downloading data from Unidata
data technologies. Siphon’s current functionality focuses on access to data hosted on a
THREDDS Data Server."
http://siphon.readthedocs.io/en/latest/
https://github.com/Unidata/siphon
Stetl
"Stetl, Streaming ETL, is an open source (GNU GPL) toolkit for the transformation (ETL)
of geospatial data. Stetl is based on existing ETL tools like GDAL/OGR and
XSLT. Stetl processing is driven from a configuration (.ini) file.
Stetl is written in Python and in particular suited for processing GML.
Stetl basically glues together existing parsing and transformation tools like GDAL/OGR (ogr2ogr) and XSLT. By using native tools like libxml2 and libxslt (via Python lxml) Stetl is speed-optimized.
The core concepts of Stetl remain pretty simple: an input resource like a file or a database table is mapped to an output resource (also a file, a database, etc) via one or more filters. The input, filters and output are connected in a pipeline called a processing chain or Chain. This is a bit similar to a current in electrical engineering: an input flows through several filters, that each modify the current."
http://www.stetl.org/en/latest/
https://github.com/geopython/stetl
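The Chain concept can be sketched in plain Python. This is not Stetl's actual API — the class and method names below are hypothetical — just a minimal illustration of the input → filter → output wiring the description is talking about:

```python
class Input:
    """Produces packets (here: a list of in-memory strings)."""
    def __init__(self, lines):
        self.lines = lines
    def read(self):
        yield from self.lines

class UppercaseFilter:
    """A trivial transformation step, standing in for e.g. an XSLT filter."""
    def invoke(self, packet):
        return packet.upper()

class Output:
    """Consumes packets (here: collects them in memory)."""
    def __init__(self):
        self.result = []
    def write(self, packet):
        self.result.append(packet)

def run_chain(source, filters, sink):
    # input -> filter(s) -> output, like a Stetl processing Chain
    for packet in source.read():
        for f in filters:
            packet = f.invoke(packet)
        sink.write(packet)

sink = Output()
run_chain(Input(["gml point", "gml polygon"]), [UppercaseFilter()], sink)
print(sink.result)  # ['GML POINT', 'GML POLYGON']
```

In real Stetl the components and their ordering come from the .ini configuration file rather than from code.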
UV-CDAT
"UV-CDAT is a powerful and complete
front-end to a rich set of visual-data exploration and analysis
capabilities well suited for climate-data analysis problems.
UV-CDAT builds on the following key technologies:
- The Climate Data Analysis Tools (CDAT) framework developed at LLNL for the analysis, visualization, and management of large-scale distributed climate data;
- ParaView: an open-source, multi-platform, parallel-capable visualization tool with recently added capabilities to better support specific needs of the climate-science community;
- VisTrails, an open-source scientific workflow and provenance management system that supports data exploration and visualization;
- VisIt: an open-source, parallel-capable, visual-data exploration and analysis tool that is capable of running on a diverse set of platforms, ranging from laptops to the Department of Energy's largest supercomputers.
- Tightly coupled integration of the CDAT Core with the VTK/ParaView infrastructure to provide high-performance, parallel-streaming data analysis and visualization of massive climate-data sets (other tightly coupled tools include VCS, VisTrails, DV3D, and ESMF/ESMP);
- Loosely coupled integration to provide the flexibility of using tools quickly in the infrastructure such as ViSUS, VisIt, R, and MatLab for data analysis and visualization as well as to apply customized data analysis applications within an integrated environment.
https://github.com/UV-CDAT/uvcdat/wiki
https://uvcdat.llnl.gov/index.html
Installation
conda create -n uvcdat -c uvcdat uvcdat hdf5=1.8.16 pyqt=4.11.3
source activate uvcdat
source deactivate uvcdat
GHCNpy
"The demand for weather, water, and climate information has been high,
with an expectation of long, serially complete observational records in
order to assess historical and current events in the Earth's system.
While assessments have been championed through monthly and annual State
of the Climate Reports produced at the National Centers for
Environmental Information (NCEI, formerly NCDC), there is a demand for
near-real time information that will address the needs of the
atmospheric science community. The Global Historical Climatology Network
– Daily data set (GHCN-D) provides a strong foundation of the Earth's
climate on the daily scale, and is the official archive of daily data in
the United States. The data set is updated nightly, with new data
ingested with a lag of approximately one day. The data set adheres to a
strict set of quality assurance, and lays the foundation for other
products, including the 1981-2010 US Normals.
While a very popular data set, GHCN-Daily is only available in ASCII text or comma separated files, and very little visualization is provided to the end user. It makes sense then to build a suite of algorithms that will not only take advantage of its spatial and temporal completeness, but also help end users analyze this data in a simple, efficient manner. To that end, a Python package has been developed called GHCNPy to address these needs. Open sourced, GHCNPy uses basic packages such as Numpy, Scipy, and matplotlib to perform a variety of tasks. Routines include converting the data to CF compliant netCDF files, time series analysis, and visualization of data, from the station to global scale."
https://github.com/jjrennie/GHCNpy
https://ams.confex.com/ams/96Annual/webprogram/Paper283618.html
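Since GHCN-Daily is distributed as fixed-width ASCII (one station/element/month per record, with 31 eight-character day slots of value plus flags), a small stdlib parser illustrates what a package like GHCNpy has to do under the hood. The record below is synthetic, and the column offsets follow the published GHCN-Daily format description — treat them as an assumption to verify against the official readme:

```python
def parse_dly_record(line):
    """Parse one fixed-width GHCN-Daily record (one station/element/month)."""
    station = line[0:11]
    year = int(line[11:15])
    month = int(line[15:17])
    element = line[17:21]
    values = []
    for day in range(31):
        start = 21 + day * 8
        raw = int(line[start:start + 5])  # 5-char value; the next 3 chars are flags
        # -9999 is the missing-data sentinel; TMAX is stored in tenths of deg C
        values.append(None if raw == -9999 else raw / 10.0)
    return station, year, month, element, values

# Synthetic record: TMAX of 15.0 C on day 1, missing for the rest of the month.
record = "USC00011084" + "2016" + "01" + "TMAX" + "  150   " + "-9999   " * 30
station, year, month, element, values = parse_dly_record(record)
print(station, element, values[0])  # USC00011084 TMAX 15.0
```

GHCNpy's netCDF conversion and plotting routines build on exactly this kind of record-by-record decoding.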
PyFerret
"In simplest terms, PyFerret is Ferret encapsulated in Python.
PyFerret is a Python module wrapping Ferret. The pyferret module provides Python functions so Python users can easily take advantage of Ferret's abilities to retrieve, manipulate, visualize, and save data. There are also functions to move data between Python and the Ferret engine, and Python scripts can be used as Ferret external functions.
But PyFerret can also be used as a transparent replacement for the traditional Ferret executable. A simple script starts Python and enters the pyferret module, giving the traditional Ferret interface. This script also supports all of Ferret's command-line options.
Inside the PyFerret wrapper is a complete, but enhanced, Ferret engine. One very noticeable enhancement is improved graphics which can be saved in common image formats. (Sorry, no more GKS metafiles.) Also, PyFerret comes packaged with many new statistical and shapefile functions which are, in fact, Python scripts making use of third-party Python modules."
http://ferret.pmel.noaa.gov/Ferret/documentation/pyferret/
https://github.com/NOAA-PMEL/PyFerret
Installation
A PyFerret environment can be installed using conda.
conda create -n FERRET -c conda-forge pyferret --yes
Enter the environment via:
source activate FERRET
Exit the environment via:
source deactivate FERRET
Suricata
"Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF and its supporting vendors."
https://suricata-ids.org/
How to Install Suricata on a Linux Box in 5 Minutes - https://danielmiessler.com/blog/how-to-install-suricata-on-a-linux-box-in-5-minutes/
Docker
"Docker is an open-source project that automates the deployment of Linux applications inside software containers. Docker containers wrap up a piece of software in a complete filesystem
that contains everything it needs to run: code, runtime, system tools,
system libraries – anything you can install on a server. This guarantees
that it will always run the same, regardless of the environment it is
running in."
https://www.docker.com/
https://github.com/docker/docker
https://en.wikipedia.org/wiki/Docker_(software)
https://github.com/bcicen/awesome-docker
Articles
Whales on a Plane: Deploying Software to NSF/NCAR Research Aircraft W/ Docker - https://sea.ucar.edu/event/whales-plane-deploying-software-nsf-ncar-research-aircraft-w-docker
Container Computing for Scientific Workflows - https://github.com/NERSC/2016-11-14-sc16-Container-Tutorial
Competition
Moving from Docker to rkt - https://medium.com/@adriaandejonge/moving-from-docker-to-rkt-310dc9aec938
Docker Compose
A tool for defining and running multi-container Docker applications.
https://docs.docker.com/compose/
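A Compose file declares each container and how they fit together. A minimal, hypothetical example (service names and images are placeholders; Compose file format version 2):

```yaml
version: "2"
services:
  web:
    build: .            # build the app image from a local Dockerfile
    ports:
      - "5000:5000"     # host:container port mapping
  redis:
    image: redis        # second container, reachable as "redis" on the default network
```

Running `docker-compose up` then starts both containers together.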
FUN IMAGES
einsteintoolkit - https://github.com/eschnett/einsteintoolkit-docker
National Land Cover Database (NLCD)
http://www.mrlc.gov/nlcd2011.php
Completion of the 2011 National Land Cover Database - http://www.asprs.org/a/publications/pers/2015journals/PERS_May_2015/HTML/files/assets/basic-html/index.html#345
Distributed Oceanographic Match-Up Service (DOMS)
"The Distributed Oceanographic Match-up Service (DOMS) is a
web-accessible service tool that will reconcile satellite and in situ
datasets in support of NASA’s Earth Science mission. The service will
provide a mechanism for users to input a series of geospatial references
for satellite observations (e.g., footprint location, date, and time)
and receive the in-situ observations that are “matched” to the satellite
data within a selectable temporal and spatial domain. The inverse of
inputting in-situ geospatial data (e.g., positions of moorings, floats,
or ships) and returning corresponding satellite observations will also
be supported. The DOMS prototype will include several characteristic
in-situ and satellite observation datasets. For the in-situ data, the
focus will be surface marine observations from the International
Comprehensive Ocean-Atmosphere Data Set (ICOADS), the Shipboard
Automated Meteorological and Oceanographic System Initiative (SAMOS),
and the Salinity Processes in the Upper Ocean Regional Study (SPURS).
Satellite products will include JPL ASCAT winds, Aquarius orbital/swath
dataset, MODIS SST, and the high-resolution gridded MUR-SST product.
Importantly, although DOMS will be established with these selected
datasets, it will be readily extendable to other in situ and satellite
collections, which could support additional science disciplines."
https://mdc.coaps.fsu.edu/doms
https://sea.ucar.edu/event/building-distributed-oceanography-match-service-doms-pair-field-observation-and-satellite-data
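The core matchup operation — pairing each satellite footprint with the in-situ observations that fall inside a space/time tolerance — can be sketched with the standard library. This is an illustrative brute-force version over synthetic data, not DOMS's actual implementation:

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def matchups(satellite_obs, insitu_obs, max_km, max_hours):
    """Pair each satellite footprint with in-situ obs inside the tolerances."""
    window = timedelta(hours=max_hours)
    pairs = []
    for sat in satellite_obs:
        for obs in insitu_obs:
            if (haversine_km(sat["lat"], sat["lon"], obs["lat"], obs["lon"]) <= max_km
                    and abs(sat["time"] - obs["time"]) <= window):
                pairs.append((sat, obs))
    return pairs

# Synthetic example: one buoy within 50 km / 6 h of the satellite footprint.
sat = [{"lat": 20.0, "lon": -40.0, "time": datetime(2016, 9, 1, 12)}]
buoys = [
    {"lat": 20.1, "lon": -40.1, "time": datetime(2016, 9, 1, 10)},  # close in space and time
    {"lat": 35.0, "lon": -40.0, "time": datetime(2016, 9, 1, 11)},  # ~1700 km away: rejected
]
print(len(matchups(sat, buoys, max_km=50, max_hours=6)))  # 1
```

A production service would index the observations spatially and temporally rather than comparing every pair, but the tolerance test is the same.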
GTX1070 Linux installation and mining clue
"As you might have read i got 18 GTX1070 and posted some benchmark information earlier (http://forum.ethereum.org/discussion/comment/42663). @vaulter asked to some some details (@vaulter
perhaps you can add your GTX1080 findings, settings?) In this topic i
want to place the more technical notes how you can get this working. The
short summery: Yes i managed to get 6x GTX1070 running at 218.11MH/s
with heavy tuning / overclocking, but no idea how this would hold long
term. Currently i keep them at 192,88MH/s (x3 rigs) which seem to be the
'safe overclocking defaults' to me. Who knows how stuff progresses with
updates from @Genoil
and if its running under a windows driver stable and fast. Safe to say
with the lesser power consumption AND more MH/s then a card like R9 390X
this GTX1070 with its price is a very nice card to have (especially if
you run apps that only run good on Nvidia cards)
It took me quite a while to get it working and this document only contains 5% of my notes and stuff, its the minimum to get you started and you will need to do some tuning on your own to max your card out. Some stuff aren't as good as i like yet (e.g. headless VNC access without the use of a monitor) but it works and more importantly its stable. Thanks go out to @Genoil for his clue and his work on ethminer. This document is not entirely ment as a walk-through as some knowledge on mining, linux overclocking and common sense is still required ... So here goes."
https://forum.ethereum.org/discussion/7780/gtx1070-linux-installation-and-mining-clue-goodbye-amd-welcome-nvidia-for-miners
NVIDIA GeForce GTX 1070 On Linux: Testing With OpenGL, OpenCL, CUDA & Vulkan
14 June 2016
"NVIDIA sent over a GeForce GTX 1070 and I've been putting it through its paces under Linux with a variety of OpenGL, OpenCL, and Vulkan benchmarks along with CUDA and deep learning benchmarks. Here's the first look at the GeForce GTX 1070 performance under Ubuntu Linux.
In conjunction with the NVIDIA 367 proprietary driver on Linux, the GeForce GTX 1070 ran into no difficulties running under Ubuntu Linux throughout my initial testing. As noted in my GTX 1080 Linux review, there wasn't any overclocking support available when enabling the CoolBits options and this is a similar limitation with the GTX 1070 (yesterday NVIDIA did release a new 367 Linux driver that I have yet to test but its official change-log at least didn't make note of any overclocking additions).
Since my earlier GTX 1080 review, nothing has changed with regards to the open-source driver support. I have yet to see any experimental patches published for at least kernel mode-setting in Nouveau while any accelerated support for Pascal will not happen until NVIDIA is able to release the signed firmware binary images for usage by the Nouveau driver. I haven't received any word from NVIDIA Corp yet when that Pascal firmware availability is expected, but at least the proprietary driver support is in good shape.
...
Well, that's all the initial data I have to share on the GeForce GTX 1070 after hammering it under Linux the past 24 hours. The GeForce GTX 1070 is a very nice upgrade over the GeForce GTX 900 series and especially if you are still using a Kepler graphics card or later. In many of our Linux benchmarks, the GeForce GTX 1070 was around 60% faster than the GTX 970! The GTX 1070 was commonly beating the GTX 980 Ti and GTX TITAN X while the GeForce GTX 1080 still delivers the maximum performance possible for a desktop graphics card at this time. The GTX 1070 (and GTX 1080) aren't only stunning for their raw performance but the power efficiency is also a significant push forward. Particularly when the GeForce GTX 1070 AIB cards begin appearing in the coming weeks at $399, the GeForce GTX 1070 should be a very nice option for Linux gamers looking to get the maximum performance for 1440p or 4K gaming. It will be fun to see later this month how the Radeon RX 480 compares, but considering the state of the Radeon Linux drivers, chances are you'll want to stick to the green side for the best Linux gaming experience unless you are a devout user of open-source drivers."
http://www.phoronix.com/scan.php?page=article&item=nvidia-gtx-1070&num=1
http://www.nvidia.com/download/driverresults.aspx/104284/en-us
"NVIDIA sent over a GeForce GTX 1070 and I've been putting it through its paces under Linux with a variety of OpenGL, OpenCL, and Vulkan benchmarks along with CUDA and deep learning benchmarks. Here's the first look at the GeForce GTX 1070 performance under Ubuntu Linux.
In conjunction with the NVIDIA 367 proprietary driver on Linux, the GeForce GTX 1070 ran into no difficulties running under Ubuntu Linux throughout my initial testing. As noted in my GTX 1080 Linux review, there wasn't any overclocking support available when enabling the CoolBits options and this is a similar limitation with the GTX 1070 (yesterday NVIDIA did release a new 367 Linux driver that I have yet to test but its official change-log at least didn't make note of any overclocking additions).
Since my earlier GTX 1080 review, nothing has changed with regards to the open-source driver support. I have yet to see any experimental patches published for at least kernel mode-setting in Nouveau while any accelerated support for Pascal will not happen until NVIDIA is able to release the signed firmware binary images for usage by the Nouveau driver. I haven't received any word from NVIDIA Corp yet when that Pascal firmware availability is expected, but at least the proprietary driver support is in good shape.
...
Well, that's all the initial data I have to share on the GeForce GTX 1070 after hammering it under Linux the past 24 hours. The GeForce GTX 1070 is a very nice upgrade over the GeForce GTX 900 series and especially if you are still using a Kepler graphics card or later. In many of our Linux benchmarks, the GeForce GTX 1070 was around 60% faster than the GTX 970! The GTX 1070 was commonly beating the GTX 980 Ti and GTX TITAN X while the GeForce GTX 1080 still delivers the maximum performance possible for a desktop graphics card at this time. The GTX 1070 (and GTX 1080) aren't only stunning for their raw performance but the power efficiency is also a significant push forward. Particularly when the GeForce GTX 1070 AIB cards begin appearing in the coming weeks at $399, the GeForce GTX 1070 should be a very nice option for Linux gamers looking to get the maximum performance for 1440p or 4K gaming. It will be fun to see later this month how the Radeon RX 480 compares, but considering the state of the Radeon Linux drivers, chances are you'll want to stick to the green side for the best Linux gaming experience unless you are a devout user of open-source drivers."
http://www.phoronix.com/scan.php?page=article&item=nvidia-gtx-1070&num=1
http://www.nvidia.com/download/driverresults.aspx/104284/en-us
Sunday, September 11, 2016
pymp
"This package brings OpenMP-like functionality to Python. It takes the good qualities of OpenMP such as minimal code changes and high efficiency and combines them with the Python Zen of code clarity and ease-of-use."
MetaMorph
"A library framework designed to (automatically) extract as much computational capability as possible from HPC systems. Its design centers around three core principles: abstraction, interoperability, and adaptivity.
We realize MetaMorph as a layered library of libraries. Each tier implements one of the core principles of abstraction, interoperability, and adaptivity. The top-level user APIs and platform-specific back-ends exist as separate shared library objects, with interfaces designated in shared header files. Primarily, this encapsulation supports custom tuning of back-ends to a specific device or class of devices. In addition, it allows back-ends to be separately used, distributed, compiled, or even completely rewritten, without interference with the other components.
The core API, library infrastructure and communication interface are written in standard C for portability and performance. Individual accelerator back-ends are generated in C with OpenMP and optional SIMD extensions (for CPU and Intel MIC), CUDA C/C++ (NVIDIA GPUs), and C++ with OpenCL (AMD GPUs/APUs and other devices). In addition, a wrapper around the top-level API is written in polymorphic Fortran 2003 to simplify interoperability with Fortran applications prevalent in some fields of scientific computing."
http://synergy.cs.vt.edu/
https://github.com/vtsynergy/MetaMorph
MetaMorph: A Library Framework for Interoperable Kernels on Multi- and Many-core Clusters - http://hgpu.org/?p=16446
Software Heritage
"Our ambition is to collect, preserve, and share all software that is publicly available in source code form. On this foundation, a wealth of applications can be built, ranging from cultural heritage to industry and research.
Software is an essential part of our lives. Given that any software component may turn out to be essential in the future, we do not make distinctions and collect all software that is publicly available in source code form.
We recognize that there is significant value in selecting among all this software some collections of particular interest, and we will encourage the construction of curated archives on top of Software Heritage.
We keep track of the origin of software we archive and store its full development history: this precious meta-information will be carefully harvested and structured for future use."
https://www.softwareheritage.org/
Gaalop
"Gaalop (Geometic Algebra Algorithms Optimizer) is a software to optimize geometric algebra files.
Algorithms can be developed by using the freely available CLUCalc software by Christian Perwass. Gaalop optimizes the algorithm and produces C++ (AMP), OpenCL, CUDA, CLUCalc or LaTeX output (other output-formats will follow)."
http://www.gaalop.de/
https://github.com/CallForSanity/Gaalop
Gaalop – High Performance Parallel Computing based on Conformal Geometric Algebra - http://www.gaalop.de/wp-content/uploads/Gaalop-High-PerformanceComputing-based-onConformal-Geometric-Algebra.pdf
Geometric Algebra Enhanced Precompiler for C++, OpenCL and Mathematica’s OpenCLLink - http://hgpu.org/?p=12044