*******************************************************************************

System administration and parameter settings:

1. Depending on the outcome of the pre-tests we will choose SLC4 if possible, otherwise go back to SLC3.
2. ssh access without password between all nodes
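A sketch of one way to set this up, assuming standard OpenSSH (key type and file names are illustrative only):
- execute on each node (or once if home directories are shared):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa            # passphrase-less key
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys     # add the key to every node's authorized_keys
chmod 600 ~/.ssh/authorized_keys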
3. afs access
4. IP port range - to be verified if this is the same for SLC4:
- execute the following command

cat /proc/sys/net/ipv4/ip_local_port_range

Currently on SLC3 the output is: 32768   61000
This is ok for us.
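Should the SLC4 default differ, the range could be adjusted; as a sketch (reusing the SLC3 values above, not an agreed setting):
echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range    # as root, non-persistent
# or persistently via /etc/sysctl.conf:
# net.ipv4.ip_local_port_range = 32768 61000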
5. Max number of file descriptors per OS:
**- to be verified if this is the same for SLC4:**
- execute
cat /proc/sys/fs/file-max
Currently on SLC3 the output is:
52345
This is ok for us.
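If the SLC4 default turns out to be lower, it could be raised in the same way; as a sketch (the value is illustrative):
echo 65536 > /proc/sys/fs/file-max     # as root, non-persistent
# or persistently via /etc/sysctl.conf:
# fs.file-max = 65536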
6. Max number of file descriptors per process
**- to be verified if this is the same for SLC4:**
- execute in bash: > ulimit -n
   or in tcsh: > limit descriptors
Currently on SLC3 the output is: 1024. This should be set to 8196.
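One way to make the 8196 limit permanent, assuming the standard PAM limits mechanism is used, is an entry in /etc/security/limits.conf (sketch only):
*    soft    nofile    8196
*    hard    nofile    8196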
7. There is no cleanup of /tmp, but there is monitoring and an email alarm if space gets low on a node.
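A minimal sketch of such an alarm, e.g. run from cron (threshold and mail address are hypothetical):
#!/bin/sh
# warn if /tmp is more than 90% full (illustrative threshold)
USE=$(df -P /tmp | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$USE" -gt 90 ]; then
    echo "/tmp on $(hostname) is ${USE}% full" | mail -s "/tmp alarm" admin@example.org
fi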
8. All nodes are time synchronized via NTP.
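Synchronization can be checked on a node with, for example:
ntpq -p     # list the configured NTP peers and their offsets
(ntpstat, where installed, gives a one-line summary of the synchronization state).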
9. The system should be newly installed before the tests (make sure servers, daemons, etc. installed by previous users are cleaned up).
10. Automatic updates must not be performed. Necessary updates can be scheduled for a fixed time/interval to be agreed upon; details need to be discussed.
11. All nodes are dual processors with at least 2.4 GHz and 1 GByte of memory.
The HLT image will be a complete copy of the installation in Atlas point1; it is 30 GBytes. We want to avoid having to build a custom image.
It is agreed that we will get at least 60 GBytes of local disk space per node.
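These figures can be verified quickly on a node with standard commands, e.g.:
grep -c ^processor /proc/cpuinfo    # number of CPUs
grep MemTotal /proc/meminfo         # installed memory
df -h                               # available local disk space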

12. It was agreed that the installation of the HLT image in a similar way as last year by LST (Hegoi and Haimo), via BitTorrent or tree rsync, is accepted for this year too. Root access for one person can be granted for this purpose.
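For illustration only, one hop of such a tree rsync could look like this (paths and host name are hypothetical):
rsync -a --delete /opt/hlt-image/ nodeXX:/opt/hlt-image/
with every node that already holds the image acting as a source for the next level of the tree.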

13. In addition to the distribution of the HLT image, there is still the option to distribute the tdaq software via RPMs as it was done last year.
We suggest using it in case the nodes are installed with SLC4:
Andrei has built a native version of tdaq-01-06-02 on SLC4 (no HLT and offline sw). This could then be distributed and used completely independently of the 'normal' tests. Timing could be measured and compared with the emulated version.

14. Python version 2.2.3 installed
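This can be checked on a node with:
python -V     # expected to report Python 2.2.3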

15. The system returns long hostnames for the node names.
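A quick check on a node:
hostname      # expected to print the long, fully qualified form, e.g. nodeXX.cern.ch (name is illustrative)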

16. Directories and Accounts

All directories should give read and execute access to all enabled accounts.

/pool/online     belonging to the atlonl account; writable only by atlonl - to be used for all online-related tests
/pool/hlt        belonging to the effuser account; writable only by effuser - to be used for ef-related tests: Infrastructure and HLT-S tests
/pool/tdaq       belonging to the atlonl account; writable only by atlonl - only for the tdaq release

/pool/users      writable by all enabled users, so everyone can create his/her /pool/users/myname directory

There are enabled user accounts, as was communicated to the owners.
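A sketch of how these directories and permissions could be set up (to be run as root; group handling is left out and simplified here):
mkdir -p /pool/online /pool/hlt /pool/tdaq /pool/users
chown atlonl  /pool/online /pool/tdaq
chown effuser /pool/hlt
chmod 755 /pool/online /pool/hlt /pool/tdaq    # read+execute for all, write only for the owner
chmod 1777 /pool/users                         # writable by all; sticky bit protects each user's directory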

17. Monitoring of the farm and of the network

- Lemon monitoring system from IT, basically as last year, for general node monitoring
- tdaq farm tools as used in point1
- network monitoring: to be clarified