Here is information on the cluster status and on accounts.

Status: 12 November


As last year, you will have to set up your .ssh directory in order to have passwordless login between the nodes. Note that this setup is different from the keygen procedure used for the Manchester cluster (I suggest you save the .ssh directory that is valid for Manchester). Instructions are below.

Contents:  LXSHARE_farm | Database_Servers | Accounts | passwordless_login

Database Servers:
------------------------

The MySQL server for the conditions tables, the geometry database and the Trigger configuration is available for use and testing.

Note: the MySQL server for the log service will be installed next.

Mail from Hans, Friday 10 November:

I've put the contents on these two servers:

  - lxmrra3801
  - pcatr07
Both are MySQL 5 servers (see below** for why there are two).
 
The contents on both are:
  - the conditions tables from COOL, imported from sqlite file
    /afs/cern.ch/user/a/atlcond/coolrep/sqlite130/OFLP130.db
     -  the DB is ATLAS_COOL_LST
  - the geometry tables, created from
    /afs/cern.ch/user/t/tsulaia/public/Hans/geomdb_dump_20Oct06.sql
     -  the DB is ATLASDD
  - the TrigConf database, created from scratch with the scripts
    made for that
     -  the DB is TrigConf_lst

 

There are two users for LST purposes on both servers:
  - user=lst_user, pw=lst06
    this user can access all databases and tables with all privileges
  - user=atlasdd_reader, pw=reader
    this user has read access to ATLASDD.
 
The connect strings to COOL on our MySQL servers would then be as follows:
"mysql://pcatr07;schema=ATLAS_COOL_LST;user=lst_user;dbname=OFLP130;password=lst06"
"mysql://lxmrra3801;schema=ATLAS_COOL_LST;user=lst_user;dbname=OFLP130;password=lst06"

 
An appropriately extended authentication file is at
  - /afs/cern.ch/user/h/hans/public/authentication.xml
Copy it to wherever you need. The CORAL_AUTH_PATH environment variable must contain
the directory holding this file.
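
For example (a sketch; the target directory ~/lst06auth is just an arbitrary choice, not a prescribed location):

mkdir -p ~/lst06auth
cp /afs/cern.ch/user/h/hans/public/authentication.xml ~/lst06auth/
export CORAL_AUTH_PATH=$HOME/lst06auth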
 
DB names, user names etc. can of course be adapted as needed.
 
** The new server, lxmrra3801, behaves OK, with one exception that may
affect access from Athena: the AtlCoolCopy tool does not work properly.
All other access works.
If you have problems with lxmrra3801, use pcatr07.
lxmrra3801 is still being investigated - no success yet.
 
The next step, I guess, would be to use one of these servers instead of
Oracle, and also to bring up DbProxy on one or both of the servers.


Accounts:
----------


For the moment you can easily work from AFS, as on lxplus.
If you are working on the local disks under /pool then please create your own user subdirectory
/pool/users/<your_account_name> and work there.
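
For example (a sketch, assuming a bash shell where $USER expands to your account name):

mkdir /pool/users/$USER
cd /pool/users/$USER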

Enabled users:
atlonl, effuser
and the user accounts
doris, haimo, hegbi, sushkov, demers, hans, rmurillo, anegri, kolos, hadavand, stelzer, mcaprini, prenkel

The user accounts have also been added as elog accounts for the LST06 elog.

If you are not on the list and you need your account enabled, then please contact Doris.

The /pool areas should have been set up as follows:
/pool/online      belongs to the atlonl account; writable only by atlonl - to be used for all online-related tests
/pool/hlt         belongs to the effuser account; writable only by effuser - to be used for EF-related tests: infrastructure and HLT-S tests
/pool/tdaq        belongs to the atlonl account; writable only by atlonl - only for the tdaq release
/pool/users       writable by all enabled users, so everyone can create his/her own /pool/users/myname directory

The nodes: LXSHARE farm
----------------------------

4 machines running SLC3:              lxb5306 - lxb5310
All other nodes (~244 in total currently) run SLC4, 32 bit.
Note that the 4 machines which were running SLC4, 64 bit (lxb5435 - lxb5438) have been re-installed with SLC4, 32 bit.
 
-> All the users you requested have interactive access; a few have root access
-> The required directories/permissions are set in /pool with the latest
-> Users should create their own sub-directory /pool/users/<account_name>

Python 2.2.3 is not yet generally installed; this is being worked on.
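
To see which Python version is currently available on the nodes, the wassh command described further down can be used (a sketch; it assumes python is in the default PATH on each node):

> wassh -c atlas_tdaq python -V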
 
Status at 10:00: additional nodes available:
lxb6509: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6510: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6511: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6512: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6513: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6514: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6515: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6516: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6517: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6518: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6519: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6520: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6521: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6522: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6523: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6524: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6525: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6526: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6527: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6528: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6529: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6530: Scientific Linux CERN SLC release 4.4 (Beryllium)
lxb6531: Scientific Linux CERN SLC release 4.4 (Beryllium)
 
Please note
    that the first 30 nodes do not have the final partition layout and will be re-installed next week, when we will have received another 30 nodes. Currently there is 1 GB for /tmp and around 40 GB or more on /pool, out of a total of 70 GB including the system. You may have to re-direct your log file path accordingly for the moment.
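
The current sizes and usage of these partitions on a given node can be checked with the standard df command (a sketch):

> df -h /tmp /pool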

The status can be seen on LXSHARE or lxplus with the command

> wassh -c atlas_tdaq uptime

The list of nodes assigned to atlas_tdaq and their architecture can be obtained with the following command, issued on one of those nodes or on an lxplus host:

> wassh -c atlas_tdaq uname -a

E.g. on 30 October (before lxb5435 - lxb5438 were re-installed with 32-bit SLC4), the output was:
lxb5431: Linux lxb5431.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 12:07:54 CEST 2006 i686 i686 i386 GNU/Linux
lxb5432: Linux lxb5432.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 12:07:54 CEST 2006 i686 i686 i386 GNU/Linux
lxb5433: Linux lxb5433.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 12:07:54 CEST 2006 i686 i686 i386 GNU/Linux
lxb5434: Linux lxb5434.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 12:07:54 CEST 2006 i686 i686 i386 GNU/Linux
lxb5435: Linux lxb5435.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 11:52:32 CEST 2006 x86_64 x86_64 x86_64 GNU/Linux
lxb5436: Linux lxb5436.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 11:52:32 CEST 2006 x86_64 x86_64 x86_64 GNU/Linux
lxb5437: Linux lxb5437.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 11:52:32 CEST 2006 x86_64 x86_64 x86_64 GNU/Linux
lxb5438: Linux lxb5438.cern.ch 2.6.9-42.0.3.EL.cernsmp #1 SMP Fri Oct 6 11:52:32 CEST 2006 x86_64 x86_64 x86_64 GNU/Linux
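
The same mechanism can be used to check the installed SLC release on each node (a sketch; /etc/redhat-release is the standard location of this information on SLC):

> wassh -c atlas_tdaq cat /etc/redhat-release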

Passwordless login between our allocated LXSHARE nodes
------------------------------------------------------------

In your HOME directory, remove the .ssh directory, re-create it,
and make sure it is not writable by anyone else.
Then, inside the new ~/.ssh directory, generate a key and do NOT give a passphrase (contrary to the Manchester ssh setup):

ssh-keygen -f identity -t rsa1

(press return at the passphrase prompt)
(press return again to confirm)

cp identity.pub authorized_keys

In the .ssh/config file set:
StrictHostKeyChecking no
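
Put together, the whole sequence would look roughly like this (a minimal sketch, assuming a bash shell; chmod 700 is one way of making the directory writable only by you, and -N "" gives the empty passphrase non-interactively instead of pressing return twice):

cd $HOME
rm -rf .ssh
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -f identity -t rsa1 -N ""
cp identity.pub authorized_keys
echo "StrictHostKeyChecking no" > config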

Note: if this procedure is not effective, then run ssh2, at least once.

Note:
   if you unexpectedly have problems with machines requiring password login
   when starting a new test session:
    - explicitly get a new token on your local PC
    - ssh to the lxbxxxx node
    - explicitly get a new token there
    - then start working
   Avoid going through other nodes, because the token on an intermediate node may have expired.
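
A sketch of the token refresh, assuming the standard CERN Kerberos/AFS setup (the exact commands may differ on your local PC):

kinit                 # new Kerberos ticket
aklog                 # new AFS token from that ticket
ssh lxbxxxx           # log in to the allocated node
kinit; aklog          # get a fresh token on the node as well before starting to work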