S(o)OS Deliverables

A

  • ABI: Application Binary Interface
  • Allocation: Action of locating and locking an amount of memory on a processing unit for a specific thread or OS module
  • API: Application Programming Interface
  • Application: The software that the user wants to run on top of S(o)OS

C

  • Code Adaptation: Process in which a segment (or set of segments) is modified and extra code is injected into it.
  • Compiler / Compilation: the transformation of one form of code into another. Generally this follows a downward path in complexity, i.e. from higher-level languages into lower-level ones, ending with the object code [WIKI]. In the case of S(o)OS, the compilation process consists of two major steps:
    • the conversion of the source code into an intermediary annotated code that contains additional information about infrastructure requirements and dependencies (work- and dataflow graph)
    • the conversion of the prepared code into an executable form that may still carry object information for easy adaptation to similar platforms (see the sketch after this list).
  • Component: See Module
  • Concurrency: two tasks are considered concurrent to one another if they can be executed in parallel. In the context of S(o)OS, the degree of concurrency between two tasks is given by their degree of mutual dependency: tasks with no dependencies exhibit the highest concurrency.
  • CPU: Central Processing Unit, see also Processor
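
  A minimal sketch of the work- and dataflow graph annotation described above, written in plain C with invented names (it is not taken from the S(o)OS toolchain). It also illustrates the point made under Concurrency: tasks without dependency edges can start immediately and are therefore the most concurrent.

    /* Hypothetical sketch of the work-/dataflow graph that the first
     * compilation step could attach to a program segment. The names are
     * invented for illustration; they do not come from S(o)OS. */
    #include <stdio.h>

    #define MAX_DEPS 4

    struct task {
        const char *name;
        int n_deps;
        int deps[MAX_DEPS];   /* indices of the tasks this one depends on */
    };

    int main(void)
    {
        /* load_left and load_right share no edge: highest concurrency.
         * combine consumes both results, so it must run after them. */
        struct task graph[] = {
            { "load_left",  0, {0} },
            { "load_right", 0, {0} },
            { "combine",    2, {0, 1} },
        };
        int n = sizeof graph / sizeof graph[0];

        for (int i = 0; i < n; i++)
            printf("%s: %s\n", graph[i].name,
                   graph[i].n_deps == 0 ? "no dependencies, can start immediately"
                                        : "waits for its inputs");
        return 0;
    }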

D

  • Deploy: Allocate a thread and its context on the designated computing unit
  • DISTAE: DISTributed Automatic Execution
  • DHT: Distributed Hash Table

F

  • FS: File System
  • FPGA: Field-Programmable Gate Array 

G

  • GPGPU: General-purpose computing on graphics processing units
  • GPGPU Computing: means of using a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU).[1]
  • GPU: Graphics Processing Unit 

H

  • HAL: Hardware Abstraction Layer
  • HPC (High Performance Computing): this does not imply parallel or cluster machines per se, but rather that maximum computational performance is achieved through any available means
  • HTM: Hardware-based Transactional Memory

I

  • IR / Intermediary Representation: the language of an abstract machine designed to aid analysis and optimisation before final compilation into architecture-specific binary code (object code); see the example below.
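
  The following hand-written C fragment illustrates the kind of lowering an intermediary representation typically performs: the source-level expression is rewritten into three-address form, one operation per instruction with explicit temporaries. It is an illustration only and does not show the actual IR of any particular compiler or of S(o)OS.

    #include <stdio.h>

    int main(void)
    {
        int a = 2, b = 3, c = 4;

        /* Source-level expression: */
        int r = a * b + c;

        /* The same computation as an IR would typically express it,
         * one operation per instruction with explicit temporaries: */
        int t1 = a * b;   /* t1 := a * b  */
        int t2 = t1 + c;  /* t2 := t1 + c */

        printf("%d %d\n", r, t2);   /* both print 10 */
        return 0;
    }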

M

  • Module: Each of the individual pieces of S(o)OS that can perform an OS task.
  • MPI: Message-Passing Interface
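
  Since message passing underlies several of the notions in this glossary (e.g. RDMPI), a minimal MPI program in C may serve as an illustration; it uses only standard MPI calls and is not specific to S(o)OS. Compile it with an MPI wrapper compiler (e.g. mpicc) and start it with mpirun.

    /* Minimal MPI example: every process reports its rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* set up the MPI runtime    */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("process %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut the runtime down     */
        return 0;
    }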

N

  • NFS: Network File System
  • NoC: Network-on-a-Chip
  • NUMA: Non-Uniform Memory Architecture 

O

  • Object Code: a sequence of instructions expressed in an architecture-specific machine language.

P

  • Parallelised code: a program that spawns multiple threads in order to distribute the workload of a specific task over multiple processors. Generally this implies data distribution, meaning that each thread performs the same task on different data (see the sketch after this list). In some cases it can imply work distribution, in which case it relates to concurrency.
  • Processing Unit: the smallest processing entity in a processor - generally these are the cores in a multi- or many-core system, but with modern processor architectures, the boundaries of cores are not always clear
  • Processor: generally denotes the CPU (see there) of the system. Large-scale environments are formed by integrating multiple processors into the system, so that we can no longer talk of a central processing unit, but just of processors. Typically in HPC (see there), multiple processors are co-located on a single board, thus forming a "node" (see there). In modern systems, processors generally incorporate multiple cores or processing units
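
  A minimal sketch of the data-distribution case described under Parallelised code: each POSIX thread applies the same operation to its own slice of an array. The slicing scheme and all names are illustrative only and not taken from S(o)OS.

    /* Data distribution with POSIX threads: every thread runs the same
     * function on a different slice of the array (illustrative sketch). */
    #include <pthread.h>
    #include <stdio.h>

    #define N        8
    #define NTHREADS 2

    static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

    struct slice { int start; int end; };   /* half-open range [start, end) */

    static void *square_slice(void *arg)
    {
        struct slice *s = arg;
        for (int i = s->start; i < s->end; i++)
            data[i] *= data[i];
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct slice parts[NTHREADS];
        int chunk = N / NTHREADS;

        for (int t = 0; t < NTHREADS; t++) {
            parts[t].start = t * chunk;
            parts[t].end   = (t + 1) * chunk;
            pthread_create(&tid[t], NULL, square_slice, &parts[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);

        for (int i = 0; i < N; i++)
            printf("%d ", data[i]);
        printf("\n");
        return 0;
    }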

Q

  • QoS: Quality of Service
  • QMS: Query Management Service 

R

  • RD: Resource Discovery
  • RDMPI: Resource Discovery Message Passing Interface
  • Resource: Each of the computing or memory units available in the system
  • RR: Resource Requester
  • RP: Resource Provider 

S

  • SAM: Speculative Access to Memory
  • Service Orientation:
  • Supercomputer: a machine built with the most advanced available means to achieve maximum computing performance.

T

  • Thread: Each of the application's independently schedulable instruction sequences
  • TM: Transactional Memory

V

  • VFS: Virtual File System
  • Virtual Memory: generally the distinction between the physical (global) address space and the address space assigned to an individual process ("local" space). With processes being distributed across multiple processing units, the mapping from virtual to actual, physical space becomes more difficult, as each processing unit may host a different part of the memory. Typically, this mapping is mostly executed by hardware. In S(o)OS, two approaches are discussed:
    (1) the classical approach, with the virtual space mapping directly to physical space, but each unit having a different mapping table
    (2) a completely virtualised approach, where the virtual address is a meaningless identifier that may map to any physical address. This allows better distribution, but makes it difficult to identify overlaps or to advance through memory.
    The realistic case will maintain a software memory mapping table with different endpoints for different regions (see the sketch below). Overlaps can be explicitly noted.
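
  A minimal sketch, with invented names, of such a software memory mapping table: each virtual region records the endpoint (processing unit) that hosts it and the physical base address on that unit, so a lookup resolves a virtual address to an (endpoint, physical address) pair.

    /* Illustrative software mapping table: virtual regions map to
     * (endpoint, physical base) pairs. Names and layout are invented. */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    struct region {
        uint64_t virt_base;   /* first virtual address of the region */
        uint64_t length;      /* region size in bytes                */
        int      endpoint;    /* processing unit hosting this region */
        uint64_t phys_base;   /* physical base address on that unit  */
    };

    static const struct region table[] = {
        { 0x00000000, 0x10000, 0, 0x80000000 },  /* code region on unit 0 */
        { 0x00010000, 0x10000, 1, 0x40000000 },  /* heap region on unit 1 */
    };

    /* Resolve a virtual address; returns 0 on success, -1 if unmapped. */
    static int resolve(uint64_t vaddr, int *endpoint, uint64_t *paddr)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (vaddr >= table[i].virt_base &&
                vaddr <  table[i].virt_base + table[i].length) {
                *endpoint = table[i].endpoint;
                *paddr    = table[i].phys_base + (vaddr - table[i].virt_base);
                return 0;
            }
        }
        return -1;
    }

    int main(void)
    {
        int ep;
        uint64_t pa;
        if (resolve(0x00010008, &ep, &pa) == 0)
            printf("virtual 0x10008 -> unit %d, physical 0x%llx\n",
                   ep, (unsigned long long)pa);
        return 0;
    }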