Commit 4e2f91de authored by Jens Korinth

Update README.md

* need to rewrite the GETTINGSTARTED documents, removed them for now
* also deleted 'release.sh' leftover
parent e51b951c
Tapasco -- Getting Started Part 2 (Zynq)
===================================================
This is the second part of the TPC tutorial, concerned with the Zynq platforms
only. In this part we will load the bitstream generated in Part 1 onto the FPGA,
then compile and execute the demo application on the board. We will use a
zedboard in the following, but the basic operation of the ZC706 is identical,
so the same steps apply there as well.
Preparing the system
--------------------
By default, the TPC Linux image has two users:
1. `root` (passwd: `root`)
2. `tapasco` (passwd: `tapascotapasco`)
Obviously, this is an extremely insecure setup and should be changed
immediately. Log in as `root`, then use the `passwd` program to change the root
password. Repeat for the user `tapasco`.
The user `tapasco` is a `sudoer`, i.e., you can use the `sudo` program to temporarily
gain root privileges. This is sufficient for TPC, but feel free to configure
the system in any way you like.
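For example, changing both passwords right after the first login:

```
[root@zed] ~ passwd          # change the password of root
[root@zed] ~ passwd tapasco  # change the password of user tapasco
```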
Preparing the TPC libraries and driver
-------------------------------------
The Tapasco software stack consists of three layers:
1. TPC(++) API (`libtapasco.so` / `libtapasco.a`)
2. Platform API (`libplatform.so` / `libplatform.a`)
3. Device Driver (`tapasco-platform-zynq.ko`)
When you are using TPC, you will only need to concern yourself with the TPC
API; the other layers are hidden from the application's point of view.
Nevertheless, they need to be available to build and run the application.
To simplify the building of the libraries, there is a script in `$TAPASCO_HOME/bin`
called `tapasco-build-libs`. It will compile all three layers:
```
[tapasco@zed] ~ tapasco-build-libs
```
This will build the libraries for the zedboard in release mode; you should see
several lines of status logs, e.g.:
```
Building release mode libraries, pass 'debug' as first argument to build debug libs...
KCPPFLAGS="-DNDEBUG -O3" make -C /home/tapasco/linux-xlnx M=/home/tapasco/tapasco/2016.03/platform/zynq/module modules
make[1]: Entering directory '/home/tapasco/linux-xlnx'
CC [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/zynq_module.o
CC [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/zynq_device.o
CC [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/zynq_dmamgmt.o
CC [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/zynq_irq.o
CC [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/zynq_ioctl.o
LD [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/tapasco-platform-zynq.o
Building modules, stage 2.
MODPOST 1 modules
CC /home/tapasco/tapasco/2016.03/platform/zynq/module/tapasco-platform-zynq.mod.o
LD [M] /home/tapasco/tapasco/2016.03/platform/zynq/module/tapasco-platform-zynq.ko
make[1]: Leaving directory '/home/tapasco/linux-xlnx'
...
```
TPC is now ready! By default, the script will build the libraries in release
mode, but you can switch to debug mode easily:
```
[tapasco@zed] ~ tapasco-build-libs --mode debug
```
Logging features are enabled in debug mode only; see the Debugging chapter at
the end of this document. See also `tapasco-build-libs --help` for more info.
Loading bitstreams
------------------
The next step is to copy the bitstreams (.bit files) we have prepared in Part 1
to the device. Once you have copied the .bit file to the board (e.g., via
`scp`), you need to load it to the FPGA, then load the driver.
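For example, from your build machine (host name and file name are placeholders):

```
you@host ~ $ scp <PATH TO .bit FILE> tapasco@<BOARD ADDRESS>:~/
```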
For convenience, there is a script called `tapasco-load-bitstream` in
`$TAPASCO_HOME/bin` that simplifies this process; it can be called like this:

```
[tapasco@zed] ~ tapasco-load-bitstream --reload-driver <PATH TO .bit FILE>
```
It will ask for the `sudo` password of the user `tapasco` (loading the bitstream
and driver requires root privileges).
If everything goes well, you should see some log messages similar to this:
```
~/tapasco/2016.03/platform/zynq/module ~/tapasco/2016.03
[sudo] password for tapasco:
Loading bitstream /home/tapasco/basic_test.bd.bit ...
Done!
Loading kernel module ...
~/tapasco/2016.03
Done.
```
On the zedboard there is a bright blue LED (left of the OLED display) that
turns on when a valid bitstream has been configured in the FPGA; it should
light up after running this script.
**Warning:** Do not load the device driver unless a valid TPC bitstream is
loaded! The system will crash and require a cold reboot. Unfortunately, there is
no safe way to probe the hardware in the reconfigurable fabric; the CPU will
attempt to read from the memory region where the FPGA is mapped and cause a bus
stall if no device in the fabric answers.
Compiling TPC(++) API programs
------------------------------
Continuing the example from Part 1, we will now compile the Rot13 application
located in `$TAPASCO_HOME/kernel/rot13`. C/C++ builds in TPC use `cmake`, a
cross-platform Makefile generator (see [1]). The pattern you see below repeats
for all CMake projects:
```
[tapasco@zed] cd $TAPASCO_HOME/kernel/rot13 && mkdir -p build && cd build
[tapasco@zed] cmake -DCMAKE_BUILD_TYPE=Release .. && make
```
This creates a `build` subdirectory in which the `tapasco-rot13` application is
built. (You can also compile in debug mode by using plain `cmake ..` instead.)
The output should look similar to this:
```
-- The C compiler identification is GNU 5.3.0
-- The CXX compiler identification is GNU 5.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
-- Build files have been written to: /home/tapasco/tapasco/2016.03/kernel/rot13/build
Scanning dependencies of target tapasco-rot13
[ 25%] Building CXX object CMakeFiles/tapasco-rot13.dir/tapasco_rot13.cpp.o
[ 50%] Linking CXX executable tapasco-rot13
[ 50%] Built target tapasco-rot13
Scanning dependencies of target rot13
[ 75%] Building CXX object CMakeFiles/rot13.dir/rot13.cpp.o
[100%] Linking CXX executable rot13
[100%] Built target rot13
```
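For orientation, a minimal `CMakeLists.txt` for such a project might look
roughly like the sketch below; the include directories match the header
locations mentioned in the Debugging chapter, but the library paths are
assumptions for illustration and may differ in your checkout:

```
# Hypothetical minimal CMakeLists.txt for a TPC API application.
cmake_minimum_required(VERSION 2.8)
project(tapasco-rot13)
# TPC API and Platform API headers (locations as referenced in this document).
include_directories($ENV{TAPASCO_HOME}/arch/common/include
                    $ENV{TAPASCO_HOME}/platform/common/include)
# ASSUMPTION: library output directories; adjust to your installation.
link_directories($ENV{TAPASCO_HOME}/arch/axi4mm/lib
                 $ENV{TAPASCO_HOME}/platform/zynq/lib)
add_executable(tapasco-rot13 tapasco_rot13.cpp)
# Link against TPC API (libtapasco) and Platform API (libplatform).
target_link_libraries(tapasco-rot13 tapasco platform)
```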
Now there should be a `tapasco-rot13` executable. As its first argument, pass a
text file to be ciphered; there is an ASCII version of the Shakespeare play
"All's Well That Ends Well" in `~/allswell.txt`. Let us test the application by
enciphering it twice; this should give back the original text:
```
[tapasco@zed] ~/tapasco/2016.03/kernel/rot13 $ ./tapasco-rot13 ~/allswell.txt > test.txt
[tapasco@zed] ~/tapasco/2016.03/kernel/rot13 $ ./tapasco-rot13 test.txt
```
If everything goes well, the plain text should appear on the screen now.
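If you prefer an automated check over eyeballing the output, diff the round
trip against the original file (this relies on `tapasco-rot13` writing the
ciphered text to stdout, as above):

```
[tapasco@zed] ~/tapasco/2016.03/kernel/rot13 $ ./tapasco-rot13 test.txt | diff - ~/allswell.txt && echo "round trip OK"
```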
**Congratulations!** This concludes the tutorial. We have seen how to build the
TPC libraries, load bitstreams and the driver, and compile TPC API applications.
Of course this does not give a complete overview of TPC, but hopefully it
provides a solid starting point for further exploration. The Rot13 application
is simple enough to explore the basics; a next step could be the `basic_test`
example in `$TAPASCO_HOME/examples/basic_test`. There is a TPC configuration for
three basic testing kernels, which perform read, write and read+write accesses
on main memory, respectively. Check out the kernels `arraysum`, `arrayinit` and
`arrayupdate` in `$TAPASCO_HOME/kernel` and try to run the example.
Debugging
---------
This document is over; everything runs perfectly fine, so why are you still
reading? ;-) Just joking! As the saying goes, "hardware is hard": it still
takes a lot of time to get even a moderately complex application running on the
FPGA. On the way there will be problems, and since there are so many moving
parts between the software application and the hardware in the fabric, we need
all the debugging help we can get. This section is concerned with some of the
debugging facilities of TPC.
First of all, switch to the release mode libraries only towards the end, when
your application is running and stable. Until then, use the debug libraries.
To compile the libraries in debug mode, use:
```
[tapasco@zed] ~ tapasco-build-libs --mode debug
```
This will enable logging in the libraries. Logging is controlled by four
environment variables:
1. `LIBPLATFORM_DEBUG`
2. `LIBPLATFORM_LOGFILE`
3. `LIBTAPASCO_DEBUG`
4. `LIBTAPASCO_LOGFILE`
The `_DEBUG` variables are a bit mask for various parts of the libraries; you
can turn on debug information selectively for each part. See
`$TAPASCO_HOME/arch/common/include/tapasco_logging.h` and
`$TAPASCO_HOME/platform/common/include/platform_logging.h` for further information.
You can simply turn on all logs by using:

```
[tapasco@zed] ~ export LIBPLATFORM_DEBUG=-1
[tapasco@zed] ~ export LIBTAPASCO_DEBUG=-1
```
The `_LOGFILE` variables can be used to redirect the log output to logfiles
(instead of stdout), e.g.:
```
[tapasco@zed] ~ export LIBTAPASCO_LOGFILE=/home/tapasco/libtapasco.log
[tapasco@zed] ~ export LIBPLATFORM_LOGFILE=/home/tapasco/libplatform.log
```
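Putting the pieces above together, a typical debug session looks like this:

```
[tapasco@zed] ~ tapasco-build-libs --mode debug
[tapasco@zed] ~ export LIBTAPASCO_DEBUG=-1 LIBPLATFORM_DEBUG=-1
[tapasco@zed] ~ export LIBTAPASCO_LOGFILE=/home/tapasco/libtapasco.log
[tapasco@zed] ~ export LIBPLATFORM_LOGFILE=/home/tapasco/libplatform.log
[tapasco@zed] ~ <run your application>
[tapasco@zed] ~ less /home/tapasco/libtapasco.log
```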
Usually this level of debug information is sufficient. But in case something is
going wrong at the driver level, you can also compile the device driver in debug
mode like this:

```
[tapasco@zed] ~ cd $TAPASCO_HOME && ./buildLibs.py driver_debug
```
This will activate another bitmask in the driver; you can access it via the
sysfs file `/sys/module/tapasco_platform_zynq/parameters/logging_level`. To activate
all debug messages, use:
```
[tapasco@zed] ~ sudo sh -c 'echo -1 > /sys/module/tapasco_platform_zynq/parameters/logging_level'
```
You can see the log messages in the system log, accessible via `dmesg`:
```
[tapasco@zed] ~ dmesg --follow
```
Run this command in a separate shell to see the log messages during the
execution of your application.
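When you are done debugging, switch the driver messages off again:

```
[tapasco@zed] ~ sudo sh -c 'echo 0 > /sys/module/tapasco_platform_zynq/parameters/logging_level'
```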
**Note:** Logging at the driver level costs _a lot of performance_! It is
entirely possible that your application has different concurrent behavior with
it activated, even with `logging_level` at `0`. Always make sure to switch back
to release mode in the driver before measurements. Logging in user space (i.e.,
in the libraries) is not as expensive, and we have tried to implement it with
minimal runtime overhead. But the Zynq CPUs are severely limited in terms of
performance, so a performance hit will be measurable for library logging, too.
For benchmarking, always use the release mode of both driver and libraries.
We hope Tapasco is useful to you and helps to get your FPGA research started as
quickly as possible! If you use TPC in your research, we kindly ask that you
cite our FCCM 2015 paper (see [2]) in your publications. Even more importantly,
let us know about issues with TPC and share your improvements and patches with
us: TPC is meant as a tool for the FPGA community and would hugely benefit from
our joint expertise. If you encounter any problems, please check the Wiki at
[3], file a bug in the bugtracker, or contact us directly via email.
Have fun!
[1]: https://cmake.org/documentation/
[2]: http://www.esa.informatik.tu-darmstadt.de/twiki/bin/view/Downloads/Tapasco.html
[3]: https://git.esa.informatik.tu-darmstadt.de/REPARA/tapasco
Tapasco -- Getting Started
=====================================
This document will walk you through an example bitstream creation with TPC.
But first we will discuss some basic terminology and explain how TPC works
in general.
Terminology
-----------
* _Platform_
Hardware platform, i.e., the basic, unchangeable environment with which
your design has to connect. Different boards will usually have different
_Platforms_ to take advantage of all available hardware components. E.g.,
there is a `zedboard` Platform for the zedboard, which has an OLED display
which other Zynq devices do not have. More importantly, the _Platform_
abstracts the basic hardware substrate, i.e., access to memory and host
communication.
* _Architecture_
The basic template for your hardware thread pool, i.e., the organisation
of your _Core_ instances. Currently there is only one such _Architecture_,
called `axi4mm`.
* _ThreadPool_
Consists of a number of _Processing Elements (PEs)_, which can all operate
simultaneously.
* _Processing Element (PE)_
A hardware IP core that performs a specific computational function. These
are the building blocks of your design in TPC. Each PE is an _instance_ of
a _Core_.
* _Core_
A custom IP core described by an IPXACT \[[2]\] description. This is the
file format the Vivado IP Integrator uses in its IP Catalog. It usually
consists of a single .zip file with a `component.xml` somewhere inside it,
which provides a detailed description of all files, ports and modules of
the IP core. For TPC, a _Core_ also contains a basic evaluation report,
i.e., an estimation of the area and the worst case data path delay /
maximal frequency the core can run at, which is device-dependent; therefore
the same _Kernel_ may have many _Cores_, one for each _Platform_ +
_Architecture_ combination.
* _Kernel_
Abstract description of a _Core_. More precisely, in TPC a _Kernel_ is the
description of a custom IP core that can be built via _High-Level Synthesis_
(HLS). The HLS step will generate a _Core_ suitable for the selected
_Platform_ and _Architecture_; a sketch of such an HLS function follows this list.
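To make this concrete: an HLS _Kernel_ is essentially a plain C/C++ function
that Vivado HLS turns into an IP core. Below is a minimal sketch of what a
`rot13` top function could look like; the signature and the fixed block size
are illustrative assumptions, not the actual kernel shipped with TPC:

```
// Hypothetical HLS top function for a rot13 Kernel.
// Signature and fixed block size are illustrative only.
#define BLOCK_SIZE 256

void rot13(char text[BLOCK_SIZE]) {
  for (int i = 0; i < BLOCK_SIZE; ++i) {
    char c = text[i];
    if (c >= 'a' && c <= 'z')
      text[i] = 'a' + (c - 'a' + 13) % 26;  // rotate lower-case letters
    else if (c >= 'A' && c <= 'Z')
      text[i] = 'A' + (c - 'A' + 13) % 26;  // rotate upper-case letters
    // all other characters pass through unchanged
  }
}
```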
Basic Operation
---------------
TPC is basically a set of scripts which provide a (slightly) more convenient
interface to the Vivado Design Suite to generate hardware designs that can be
used with a uniform _Application Programming Interface (API)_ called __TPC API__.
The hardware generation flow consists of a series of scripts which control the
execution of the Vivado tools. TPC itself is written in Scala \[[3]\] and
primarily arranges files and data for the Vivado execution automatically.
It can automatically run Vivado HLS to generate IP cores and can perform a
primitive form of __Design Space Exploration (DSE)__, ranging over three design
parameters:
1. Design Frequency
2. Number of PEs (~ area)
3. Alternative Cores (cores with the same ID are treated as alternatives)
You can choose to optimize any one of these, or all at the same time. A word of
warning: as mentioned, this process is rather primitive and will usually
require several complete P\&R runs, each of which usually takes several hours
to complete (depending on your _Platform_ and _Cores_). Also note that it is
not guaranteed to find the "optimal" solution.
By default, TPC can issue __parallel builds__: the user selects a set of
_Architectures_, _Platforms_ and _Compositions_, and each combination will be
executed in parallel. __Beware of combinatorial explosion! It is best to select
a single _Platform_, _Architecture_ and _Composition_ until you are certain that
everything works as expected (and you have enough licenses + CPU power).__
All the entities which TPC works on/with are described by _Description Files_
in JSON format \[[1]\]. By convention, TPC will automatically scan certain
directories for the description files (see below). There exist five kinds of
description files:
1. _Kernel Descriptions_ (`kernel.description`)
These files contain a _Core_ recipe for HLS.
2. _Platform Descriptions_ (`platform.description`)
Contains basic information about a _Platform_ and links to the Tcl library
that can be used to instantiate the _Platform_ in hardware. This library
builds a basic frame to which the rest of the design is connected.
3. _Architecture Descriptions_ (`architecture.description`)
Contains basic information about an _Architecture_ and links to the Tcl
library that can be used to instantiate the _Architecture_ in hardware.
4. _Composition Descriptions_ (any name)
Contains a _ThreadPool_ description, i.e., a list of _Cores_ and the number
of desired instances. Can be provided inline in the _Configuration_.
5. _Configuration Descriptions_ (any name)
Can be provided as command line arguments to `tapasco`, or (more conveniently)
in a separate file. Contains all parameters for the current _Configuration_;
the _Configuration_ determines for which _Platforms_, _Architectures_ and
_Compositions_ bitstreams shall be generated, and configures optional
_Features_ of _Platform_ and _Architecture_. It also controls the basic
execution environment, e.g., can re-configure directories etc.
Many of these description files reference other files. It is always possible to
specify absolute paths, but it is more convenient to use _relative paths_. By
convention, all relative paths are resolved relative to the location of the
description file.
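To illustrate the convention, consider a hypothetical `kernel.description` in
`$TAPASCO_HOME/kernel/rot13` (the field names here are made up for
illustration; they do not reproduce the actual schema):

```
{
  "Name": "rot13",
  "Files": ["rot13.cpp"]
}
```

The relative path `rot13.cpp` resolves against the directory of the description
file, i.e., to `$TAPASCO_HOME/kernel/rot13/rot13.cpp`.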
Directory Structure
-------------------
All paths in TPC can be reconfigured using _Configuration_ parameters, but when
nothing else is specified, the default directory structure below `$TAPASCO_HOME` is
used:
* `arch`
Base directory for _Architectures_; will be searched for
`architecture.description`s.
* `bd`
_Output directory_ for complete hardware designs generated by TPC (generated
on first use). Organized as `<COMPOSITION NAME/HASH>/<ARCH>/<PLATFORM>`.
* `core`
_Output directory_ for _Cores_ (generated on first use); contains the TPC IP
catalog. Organized as `<KERNEL>/<ARCH>/<PLATFORM>`.
* `kernel`
Base directory for _Kernels_; will be searched for `kernel.description`s.
* `platform`
Base directory for _Platforms_; will be searched for
`platform.description`s.
There are some more directories in `$TAPASCO_HOME`, but only TPC developers need to
concern themselves with them. As a TPC user it is sufficient to understand the
directory structure above. Each base path can be reconfigured in the
_Configuration_, which is most useful for _Kernels_, e.g., to switch between
benchmark suites.
Tutorial
--------
Finally, we can start with the tutorial itself. In this example we will produce
a bitstream containing only a single _Kernel_, an implementation of the ROT13
cipher (a special case of the Caesar cipher). ROT13 shifts each of the 26
letters of the Latin alphabet by an offset of 13 (with wrap-around). Ciphers of
this family have documented uses dating back to the Roman Empire, where they
were (presumably) used to keep people from reading messages "over the shoulder".
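As a reference point, the same cipher can be expressed on a host shell with a
single `tr` invocation; since a shift of 13 is its own inverse, applying it
twice yields the original text:

```
$ echo 'Hello, TPC!' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
Uryyb, GCP!
$ echo 'Uryyb, GCP!' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
Hello, TPC!
```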
We will use `itapasco` to create a configuration file for us, so start it:
1. `itapasco`
TPC should greet you with a menu similar to this:
```
Welcome to interactive Tapasco
*****************************************
What would you like to do?
a: Add an existing IPXACT core .zip file to the library
b: List known kernels
c: List existing cores in library
d: Build a bitstream configuration file
e: Exit
Your choice:
```
2. Select `d` by entering `d<RETURN>`.
```
Select Platform(s)[|x| >= 1]:
a ( ): vc709
b ( ): zedboard
c ( ): zynq
Your choice (press return to finish selection):
```
3. This is a menu that allows multiple choices; the constraint on your
selection is represented by `[|x| >= 1]`, which means that you have to select
at least one _Platform_. Select the zedboard _Platform_ via `b<RETURN>`:

```
a ( ): vc709
b (x): zedboard
c ( ): zynq
Your choice (press return to finish selection):
```
4. Exit the menu by `<RETURN>`:
```
Design Space Exploration Mode[]:
a: DesignSpaceExplorationModeNone
b: DesignSpaceExplorationModeFrequency
c: DesignSpaceExplorationModeAreaFrequency
Your choice:
```
5. Let's keep it simple; choose None via `a<RETURN>`.
```
Select a kernel[]:
a: arrayinit
b: arraysum
c: arraysum-noburst
e: countdown
...
l: rot13
Your choice:
```
6. The next step is to build the composition: `itapasco` lists the available
_Kernels_ and _Cores_; choose `rot13` via the corresponding key.
```
Number of instances[> 0]:
```
7. Choose any number > 0, e.g., `2<RETURN>`.
```
Add more kernels to composition?[]:
a: true
b: false
Your choice:
```
8. `itapasco` will keep asking whether you want to add more kernels. Finish the
composition by `b<RETURN>`.
```
LED: Enabled[]:
a: true
b: false
Your choice:
```
9. Next, `itapasco` will query all currently implemented _Features_ of the
_Platform_:
`LED` means that there's a simple controller for the on-board LEDs to show
the internal state (available on Zynq, VC709).
`OLED` is only available on the zedboard; it visually shows the number of
interrupts that occurred at each PE.
`Cache` activates a Xilinx System Cache as a sort-of L2 (doesn't work with
the latest version, working on it).
`Debug` adds VIO cores to the main input and output ports of the design;
currently only implemented on `zedboard`; designs are not likely to build,
but it can occasionally be useful.
Answer all these questions as you like.
```
Enter filename for configuration[]:
```
10. Finally, `itapasco` asks for a file name for your configuration file. Choose
anything you like, e.g., `test.cfg`.
```
Run Tapasco with this configuration now?[]:
a: true
b: false
Your choice:
```
11. You can run Vivado directly now via `a<RETURN>`.
This process will take between 30 minutes and 5 hours, depending on your
choices, and will generate a lot of output along the way. It will mention the
location of the Vivado logfiles; you can watch them via `tail --follow <FILE>`
in a separate shell, if you like.
If everything went well, there should be a `.bit` file in
`$TAPASCO_HOME/bd/<YOUR BD>/axi4mm/zedboard` afterwards (refer to the logging
output for the value of `<YOUR BD>`; if you had used an external _Composition_
description file, its name would be used instead of the hash).
In the same directory is a subdirectory called `bit`, which contains the Vivado
project. You can open it and work with it just as you would with any regular
project.
__Congratulations!__ If you reached this point, you've just built your first
bitstream with TPC. That's it for now; continue reading in
[GETTINGSTARTED-zynq.md](GETTINGSTARTED-zynq.md) for a complete walkthrough on
the Zynq boards (zedboard, ZC706).
[1]: http://json.org
[2]: http://www.accellera.org/activities/working-groups/ip-xact
[3]: http://www.scala-lang.org
The Task Parallel System Composer (TaPaSCo)
===========================================
<img src="icon/tapasco_icon.png" alt="Tapasco logo"/>
System Requirements
-------------------
TaPaSCo is known to work in this environment:
* Intel x86_64 arch
* Fedora 24/25, Ubuntu 14.04/16.04
* Bash Shell 4.2.x+
Other setups likely work as well, but are untested.
Prerequisites
-------------
To use TaPaSCo, you'll need working installations of
* Vivado Design Suite 2016.2 or newer
* Java SDK 7+
* sbt 0.13.x
* git
If you want to use the High-Level Synthesis flow for generating custom IP
cores, you'll also need:
* Vivado HLS 2016.2+
Check that at least the following are in your `$PATH`:
* `sbt`
* `vivado`
* `git`
* `bash`
* \[`vivado_hls`\]
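A quick way to check this (remember that `vivado_hls` is optional):

```
for c in sbt vivado git bash vivado_hls; do command -v "$c" || echo "$c not found"; done
```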
Install sbt
-----------
Installing multiple versions of Java, Scala and tools like sbt can be a hassle.
[SDKman!](http://sdkman.io/) simplifies the process by managing the
installations without root requirements. To install sbt, simply run:
```
curl -s "https://get.sdkman.io" | bash
```
Then, run in a new terminal:
```
sdk install sbt
```
Check that this worked via
```
sbt version
```
If `sbt` was successfully installed, it will return its version number.
Basic Setup
-------------------
1. Open a terminal in the main directory of the repository and source the
TaPaSCo setup script via `. setup.sh`.
You need to do this every time you use TaPaSCo (or put it into your
`~/.bashrc` or `~/.profile`).
2. Build TaPaSCo: `sbt compile` (this may take a while, `sbt` needs to fetch
all dependencies etc. once).
3. Create the necessary jar files with `sbt assembly`.
4. Run TaPaSCo unit tests: `sbt test`
5. _Optional_: Generate a sample configuration file: `tapasco -n config.json`
TaPaSCo should exit immediately and `config.json` will include a full
configuration that can be read with `--configFile`, including one example
for each kind of job.
When everything completed successfully, **TaPaSCo is ready to use!**
Acknowledgements
----------------
TaPaSCo is based on [ThreadPoolComposer][1], which was developed by us as part
of the [REPARA project][2], a _Framework Seven (FP7) funded project by the
European Union_.
We would also like to thank [Bluespec, Inc.][3] for making their _Bluespec
SystemVerilog (BSV)_ tools available to us and their permission to distribute
the Verilog code generated by the _Bluespec Compiler (bsc)_.
[1]: https://git.esa.informatik.tu-darmstadt.de/REPARA/threadpoolcomposer.git
[2]: http://repara-project.eu/
[3]: http://bluespec.com/
```
#!/bin/bash
# release.sh: pack the files listed in Release-$VERSION into
# Tapasco-$VERSION.tar.xz, re-rooted under a tapasco/ directory.
VERSION=$1
TEMPBSE=/tmp/tapasco_temp
TEMPDIR=$TEMPBSE/tapasco/$VERSION
ZIP=Tapasco-$VERSION.tar.xz
CURRDIR=`pwd`
cd $TAPASCO_HOME && cat Release-$VERSION | xargs tar cvJf $ZIP && \
pushd /tmp && mkdir -p $TEMPDIR && cd $TEMPDIR && tar xvJf $CURRDIR/$ZIP && \
cd ../.. && rm $CURRDIR/$ZIP && tar cvJf $CURRDIR/$ZIP tapasco && \
cd .. && rm -rf $TEMPBSE && popd  # was '$TMPBSE': undefined variable
```