
dCache configuration variables

dCache is built on the concepts of domains and cells. A domain (one domain == one JVM) is the container for one or more services (cells). This wiki page gathers all the configuration variables and their default values. The following list of cells shows the configuration variables per cell and their descriptions.

dCache Server 2.1.2

There are several configuration parameters that are not cell specific; they may influence cell behavior, but on a global scale:

DCACHE


# -----------------------------------------------------------------------
# dCache default values
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for dCache
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.
#
# Many parameters appear under two different names: A legacy name
# from the old dCacheSetup file used before dCache 1.9.7, and a new
# name following a hierarchical naming scheme with dots. The legacy
# names are deprecated and will be removed at some point in the
# future.


# -----------------------------------------------------------------------
# Parameters related to dCache startup
# -----------------------------------------------------------------------


# If defined, the Java process will run under the UID of this user.
# Notice that log files will continue to be generated with the user
# id that invoked the init script. When undefined or left blank, the
# UID will not be changed.
(not-for-services)dcache.user=dcache


# Type of namespace backend. Legal values are pnfs and chimera.
dcache.namespace=chimera


# The layout determines which domains to start.
(not-for-services)dcache.layout=single


# Base directory for layout files
(not-for-services)dcache.layout.dir=${dcache.paths.etc}/layouts


# The layout file describes the domains of a layout
(not-for-services)dcache.layout.uri=file:${dcache.layout.dir}/${dcache.layout}.conf
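
# For example (a sketch; the layout name 'mylayout' is hypothetical):
# setting
#
#   dcache.layout=mylayout
#
# in etc/dcache.conf makes dCache read the domain definitions from
# ${dcache.paths.etc}/layouts/mylayout.conf.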


# Directory for PID files
(not-for-services)dcache.pid.dir=/var/run


# PID file for daemon wrapper script
(not-for-services)dcache.pid.java=${dcache.pid.dir}/dcache.${domain.name}-java.pid


# PID file for Java process
(not-for-services)dcache.pid.daemon=${dcache.pid.dir}/dcache.${domain.name}-daemon.pid


# Directory for log files
(not-for-services)dcache.log.dir=/var/log/dcache


# Path to log file
(not-for-services)dcache.log.file=${dcache.log.dir}/${domain.name}.log


# This variable describes what should be done with an existing log
# file when a domain is started. The options are either to rename
# LOGFILE to LOGFILE.old so allowing a new log file to be created, or
# to retain the log file and subsequent logging information will be
# appended.
#
# The valid values are:
# new
# keep
(not-for-services)dcache.log.mode=keep


# Logback configuration file
(not-for-services)dcache.log.configuration=file:${dcache.paths.etc}/logback.xml


# Delay, in seconds, between automatic restarts of a crashed domain
(not-for-services)dcache.restart.delay=10


# Directory used for creating the files to suppress automatic restart
(not-for-services)dcache.restart.dir=/tmp


# File used to suppress automatic restart
(not-for-services)dcache.restart.file=${dcache.restart.dir}/.dcache-stop.${domain.name}


# Java maximum heap size
(not-for-services)dcache.java.memory.heap=512m


# Java maximum direct buffer size
(not-for-services)dcache.java.memory.direct=512m


# Directory where to store heapdumps
(not-for-services)dcache.java.oom.location=${dcache.log.dir}


# Path to heap dump file
(not-for-services)dcache.java.oom.file=${dcache.java.oom.location}/${domain.name}-oom.hprof


# Extra jar files to add to the class path
(not-for-services)dcache.java.classpath=


# ---- The Library path
#
# Can contain .so libraries for JNI.
#
(not-for-services)dcache.java.library.path=${dcache.paths.lib}



# Property to allow a site to add any extra, site-specific Java VM options.
(not-for-services)dcache.java.options.extra=
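
# As a hedged example, a site wanting additional JVM flags could add a
# line such as the following to etc/dcache.conf (the flags shown are
# illustrative, not a recommendation):
#
#   dcache.java.options.extra=-XX:+UseConcMarkSweepGC -Xms512m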


# Java VM options
#
# You should not modify this property. Instead, use the appropriate
# properties to alter specific behaviour, or use dcache.java.options.extra
# to add any site-specific Java VM options.
#
# Notes:
# - wantLog4jSetup is used by eu.emi:trustmanager
(not-for-services)dcache.java.options=\
-server \
-Xmx${dcache.java.memory.heap} \
-XX:MaxDirectMemorySize=${dcache.java.memory.direct} \
-Dsun.net.inetaddr.ttl=${net.inetaddr.lifetime} \
-Dorg.globus.tcp.port.range=${net.wan.port.min},${net.wan.port.max} \
-Djava.net.preferIPv4Stack=true \
-Dorg.dcache.dcap.port=${pool.dcap.port} \
-Dorg.dcache.net.tcp.portrange=${net.lan.port.min}:${net.lan.port.max} \
-Dorg.globus.jglobus.delegation.cache.lifetime=${gsi.delegation.cache.lifetime} \
-Dorg.globus.jglobus.crl.cache.lifetime=${gsi.crl.cache.lifetime} \
-Djava.security.krb5.realm=${kerberos.realm} \
-Djava.security.krb5.kdc=${kerberos.key-distribution-center-list} \
-Djavax.security.auth.useSubjectCredsOnly=false \
-Djava.security.auth.login.config=${kerberos.jaas.config} \
-Djava.awt.headless=true \
-DwantLog4jSetup=n \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=${dcache.java.oom.file} \
-javaagent:${dcache.paths.classes}/spring-instrument-3.0.6.RELEASE.jar \
${dcache.java.options.extra}


# The following property describes whether dCache should run under Terracotta.
# It is only supported by the srm service at this time, so do not enable it in
# dcache.conf; instead, enable it in the layout file for the domain the srm
# service runs within.
#
# For example:
#
# [srmDomain]
# dcache.terracotta.enabled=true
# dcache.terracotta.install.dir=/opt/terracotta
#
# [srmDomain/srm]
# [srmDomain/spacemanager]


(not-for-services)dcache.terracotta.enabled=false


# The following parameter specifies the location of Terracotta.
# If dcache.terracotta.enabled is true then this must be specified as well.
(not-for-services)dcache.terracotta.install.dir=


# Location of the Terracotta configuration file
(not-for-services)dcache.terracotta.config.path=${dcache.paths.etc}/tc-config.xml


# -----------------------------------------------------------------------
# Parameters related to what runs inside a domain
# -----------------------------------------------------------------------


# A batch file to execute in every domain before services are loaded.
(not-for-services)domain.preload=file:${dcache.paths.share}/cells/preload.fragment


# Directory containing service batch files (the batch files that start
# dCache cells)
domain.service.dir=${dcache.paths.share}/services


# Base URI of service batch files (the batch files that start dCache
# cells). The trailing slash is significant due to how URIs are
# resolved relative to each other.
domain.service.uri.base=file:${domain.service.dir}/


# URI to service batch file. A relative URI and path is resolved by
# searching the plugin directories. If not found, it is resolved
# relative to domain.service.uri.base.
domain.service.uri=${domain.service}.batch
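
# For example, for a service declared as [poolDomain/pool] in the
# layout file, ${domain.service} is 'pool', so the batch file
# 'pool.batch' is first searched for in the plugin directories and,
# if not found there, resolved against domain.service.uri.base to
# file:${dcache.paths.share}/services/pool.batch.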


# -----------------------------------------------------------------------
# Generic network related parameters
# -----------------------------------------------------------------------


# Port range used for transfers using typical WAN protocols
(not-for-services)net.wan.port.min=20000
(not-for-services)net.wan.port.max=25000


# Port range used for transfers using typical LAN protocols
(not-for-services)net.lan.port.min=33115
(not-for-services)net.lan.port.max=33145


# Java DNS cache (seconds)
(not-for-services)net.inetaddr.lifetime=1800


# -----------------------------------------------------------------------
# Protocol specific options
# -----------------------------------------------------------------------


# GSI caching parameters (ms)
(not-for-services)gsi.delegation.cache.lifetime=30000
(not-for-services)gsi.crl.cache.lifetime=60000



# ---- Kerberos Configuration
#
# Your kerberos 5 realm, used by Kerberos dcap and FTP doors
#
(not-for-services)kerberos.realm=EXAMPLE.ORG


# A comma-separated list of KDC hostnames. localhost may be used if
# a KDC multiplexer is running on the same machine as the Kerberos FTP doors.
#
kerberos.key-distribution-center-list=localhost



# Template JAAS configuration files are available in the
# share/examples/kerberos directory as jgss.conf and jgss_host.conf.
# Please copy these files into ${dcache.paths.etc} and modify their
# content as appropriate. The minimum configuration is to change
# the principle value, replacing "door.example.org" with the FQDN of
# the door and replacing "EXAMPLE.ORG" with the Kerberos Realm.
#
# The file jgss.conf is suitable for a domain running a Kerberos FTP
# door and jgss_host.conf is suitable for a domain running a Kerberos
# dcap door. Only one file may be specified per domain.
#
kerberos.jaas.config=${dcache.paths.etc}/jgss.conf
#kerberos.jaas.config=${dcache.paths.etc}/jgss.conf
#kerberos.jaas.config=${dcache.paths.etc}/jgss_host.conf



# -----------------------------------------------------------------------
# Cell Communication
# -----------------------------------------------------------------------


# ---- Which message broker implementation to use
#
# Selects between various message brokers. The message broker
# determines how dCache domains communicate with each other. Valid
# values are:
#
# 'cells' is the classic cells based system. It relies on a central
# location service that all domains connect to. The host, port and
# domain of this service is defined by broker.host, broker.port and
# broker.domain.
#
# 'amq' connects to an ActiveMQ broker.
#
# 'amq-embedded' starts an embedded ActiveMQ broker in the domain
# specified by broker.domain. For other domains this is equivalent
# to specifying 'amq'.
#
# 'cells+amq-embedded' is a hybrid broker. An embedded ActiveMQ
# broker is started in the domain specified by broker.domain. At the
# same time a classic cells location service is instantiated in the
# same domain. Thus both 'cells' and 'amq' can be used by other
# domains to connect to the broker.
#
# 'openmq' connects to an OpenMQ broker.
#
# 'cells+openmq' is a hybrid solution. A connection to an OpenMQ
# broker is established. At the same time a classic cells location
# service is instantiated in dCacheDomain. Thus both 'cells' and
# 'openmq' can be used by other domains to connect to the broker.
#
# 'none' means no broker connection is established. This is used for
# single domain deployments.
#
(not-for-services)broker.scheme=cells


# ---- Broker for interdomain communication
#
# By default both the cells and the hybrid broker styles use a star
# topology with all messages going through a central domain. This
# domain is usually dCacheDomain, but any domain can be used.
#
# As all other domains need to connect to the broker, broker.host
# has to be configured throughout the dCache instance unless the
# broker runs on the local host or if there is no broker.
#
# Domains open a UDP port to listen for topology information. The
# information is sent from the broker.domain domain. The port
# number on which a domain listens for topology information is
# configured by the broker.client.port property. This is either
# the port number or 0 (indicating a randomly chosen port number).
#
# NOTE: broker.client.port must be EITHER a unique port number OR
# 0. This means that it is almost certainly wrong to configure this
# property anywhere other than in a domain's context (i.e., immediately
# after declaring a domain).
#
# Inter-domain messages are sent via TCP on the port defined by
# broker.messaging.port. Since topology discovery uses UDP,
# broker.port and broker.messaging.port may have the same port
# number.
#
(not-for-services)broker.domain=dCacheDomain
(not-for-services)broker.host=localhost
(not-for-services)broker.port=11111
(not-for-services)broker.messaging.port=${broker.port}
(not-for-services)broker.client.port = 0
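
# For example (a sketch; the host name is hypothetical): on every host
# other than the one running ${broker.domain}, etc/dcache.conf would
# typically contain
#
#   broker.host=head-node.example.org
#
# so that all domains can reach the central broker domain.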


# ---- Location of location manager configuration file
#
# Only used with broker.scheme=cells and only by the
# ${broker.domain} domain. If the file doesn't exist then a default
# 'star' topology is used, where the ${broker.domain} domain accepts
# connections from all other domains and routes messages
# accordingly.
#
# If the ${broker.cells.config} file exists then it is read by the
# lmd cell running in ${broker.domain} on startup. This allows
# site-specific adjustments to the messaging topology.
#
# Please note that adjusting the messaging topology is an advanced
# feature that few (if any) dCache deployments need.
# Using a different messaging technology may be a preferable
# solution; see broker.scheme property for the alternatives.
#
# The user ${dcache.user} must be able to write into the directory
# in which the file is located for the 'setup write' command of
# location manager cell (lmd) to work.
#
(not-for-services)broker.cells.config=${dcache.paths.etc}/lm.config


# ---- Port and host used for ActiveMQ broker
#
# Determines the host and port used for the ActiveMQ broker. The
# host defaults to ${broker.host}. Only used if broker.scheme is set
# to one of the ActiveMQ schemes ('amq', 'amq-embedded' or
# 'cells+amq-embedded').
#
(not-for-services)broker.amq.host=${broker.host}
(not-for-services)broker.amq.port=11112
(not-for-services)broker.amq.ssl.port=11113


# ---- Connection URL for ActiveMQ
#
# By default, the ActiveMQ connection URL is formed from
# broker.amq.host and broker.amq.port properties. The broker.amq.url
# property may be used to configure more advanced broker
# topologies. Consult the ActiveMQ documentation for possible
# values.
#
(not-for-services)broker.amq.url=failover:tcp://${broker.amq.host}:${broker.amq.port}


# ----- OpenMQ broker host
(not-for-services)broker.openmq.host=${broker.host}


# ----- OpenMQ broker port
(not-for-services)broker.openmq.port=11112


# ----- OpenMQ interval in milliseconds between connection attempts
(not-for-services)broker.openmq.reconnect.interval=30000


# -----------------------------------------------------------------------
# Cell naming
# -----------------------------------------------------------------------


httpdoor/cell.name=HTTP-${host.name}
info/cell.name=info
nfsv41/cell.name=NFSv41-${host.name}
statistics/cell.name=PoolStatistics
loginbroker/cell.name=LoginBroker


# -----------------------------------------------------------------------
# Cell addresses of major dCache components
# -----------------------------------------------------------------------


pnfsmanager=PnfsManager
poolmanager=PoolManager
loginBroker=LoginBroker


# -----------------------------------------------------------------------
# Login broker
# -----------------------------------------------------------------------
#
# A login broker maintains a list of doors in dCache. Each door is
# configured to register with zero or more login brokers. By default
# all doors other than the SRM door register with a single central
# login broker.
#


# ---- How often a door registers with its login brokers
#
# The time in seconds between two registrations.
#
loginBrokerUpdateTime=5


# ---- Threshold for load changes in a door to trigger reregistration
#
# The registration with a login broker contains information about
# the current load of a door. If the load changes rapidly, then a
# door may update its registration before the next scheduled update
# time. This parameter specifies the fraction of the load that
# triggers a reregistration.
loginBrokerUpdateThreshold=0.1
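
# For example, with the default value of 0.1, a door whose reported
# load has changed by more than 10% since its last registration will
# reregister immediately rather than wait for the next scheduled
# update.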


# -----------------------------------------------------------------------
# Components
# -----------------------------------------------------------------------




# ---- SSL Server certificate
#
# This parameter specifies the path to the file containing the
# PKCS12 encoded server certificate used for SSL. The host certificate
# in /etc/grid-security/ needs to be converted to PKCS12 format before
# it can be used with SSL. Use the 'bin/dcache import
# hostcert' command to perform this task. This is used in Webadmin and WebDAV.
#
# Notice that for GSI the host certificate in /etc/grid-security/ is used
# directly.
#
keyStore=${dcache.paths.etc}/hostcert.p12


# ---- Password for SSL server certificate
#
# This parameter specifies the password with which the PKCS12 encoded
# server certificate is encrypted.
#
keyStorePassword=dcache


# ---- Trusted SSL CA certificates
#
# This parameter specifies the path to a Java Keystore containing
# the trusted CA certificates used for SSL. The CA certificates
# in /etc/grid-security/certificates/ need to be converted into a
# Java Keystore file before they can be used with SSL. Use the
# 'bin/dcache import cacerts' command to perform this task.
# This is used in Webadmin and WebDAV.
#
# Notice that for GSI the CA certificates in
# /etc/grid-security/certificates/ are used directly.
#
trustStore=${dcache.paths.etc}/certificates.jks


# ---- Password for trusted SSL CA certificates
#
# This parameter specifies the password with which the Java Keystore
# containing the trusted CA certificates is encrypted.
#
trustStorePassword=dcache
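
# As an illustration (the password value is a placeholder): after
# converting the certificates with 'bin/dcache import hostcert' and
# 'bin/dcache import cacerts', a site using a non-default password
# could set in etc/dcache.conf:
#
#   keyStorePassword=secret
#   trustStorePassword=secret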



# -----------------------------------------------------------------------
# Filesystem Locations
# -----------------------------------------------------------------------



# ---- SRM/GridFTP authentication file
#
# Do not change unless you know what you are doing.
#
# An example file is located in share/examples/gPlazma directory. Copy
# this file into ${dcache.paths.etc} directory and modify as appropriate.
#
kpwdFile=${dcache.paths.etc}/dcache.kpwd






# -----------------------------------------------------------------------
# common to gsiftp and srm
# -----------------------------------------------------------------------


# ---- Whether implicit space reservations should be enabled.
#
# The following variable will have no effect unless the SRM Space
# Manager is enabled.
#
#srmImplicitSpaceManagerEnabled=yes


overwriteEnabled=false


# ---- Host certificate refresh period in seconds.
#
# This option controls the interval at which the host certificate is
# reloaded on a running door. Currently supported by the SRM door.
#
hostCertificateRefreshPeriod=43200


# ---- Trust anchor refresh period in seconds.
#
# Grid-based authentication usually requires loading a set of certificates
# that are accepted as certificate authorities. This option controls
# the interval at which these trust anchors are reloaded.
# Currently supported by the SRM door.
#
trustAnchorRefreshPeriod=43200



# -----------------------------------------------------------------------
# Network Configuration
# -----------------------------------------------------------------------


sshPort=22124
# Telnet is only started if the telnetPort line is uncommented.
# This should be for debug use only.
#telnetPort=22123



#
# Various components can bind to a particular network interface. The value
# of the listen property describes which interface a door should use. The
# value is the IP address of the interface the component should use; for
# example, the loop-back interface (commonly 'lo') is '127.0.0.1' for IPv4,
# '::1' for IPv6. An empty value means the door will listen on all
# interfaces.
#
listen=
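
# For example, to accept connections only via the IPv4 loop-back
# interface:
#
#   listen=127.0.0.1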


# -----------------------------------------------------------------------
# Database Configuration
# -----------------------------------------------------------------------
#
# The current setup assumes that one or more PostgreSQL servers are
# used by the various dCache components. Database user and database
# password are configurable. The dCache components use the databases 'dcache',
# 'replicas', 'companion' and 'billing'. However, these might be located on
# separate hosts.
#
# The most performant configuration is to have the database server
# running on the same host as the dCache component that will
# access it. Therefore, the default value for all the following
# variables is 'localhost'. Uncomment and change these variables
# only if you have a reason to deviate from this scheme.
#
# For example, one valid deployment would be to put the 'billing'
# database on a different host than the pnfs server database and
# companion, but keep the httpDomain on the admin host.


# ---- Whether to manage database schemas automatically
#
# When true, database schemas will be automatically updated when
# needed. Not all services support this setting. This setting
# applies to a complete domain and must not be defined at the
# service level.
db.schema.auto=true


# ---- pnfs Companion Database Host
#
# Do not change unless you know what you are doing.
#
# Database name: companion
#
#companionDatabaseHost=localhost


# ---- pnfs Manager interface to Deletion Registration Configuration
#
# Deletion Registration functionality in pnfs, when enabled, creates a
# record of each file deletion in the pnfs namespace. dCache does not
# delete precious or online data files in pools if a deletion
# registration record is not present. Use of the trash database is
# recommended with the pnfs namespace, as pnfs can report files as
# not being found when some components of pnfs are not running. This
# issue does not affect Chimera.
#
# There are two ways to connect to the database containing the
# registration of deletions of files in the pnfs namespace: direct
# database access or through a special ".()()" file in the pnfs nfs
# interface.
# To configure access through pnfs nfs, set the value of
# pnfsDeleteRegistration to pnfs:
# To configure direct access to postgres, set pnfsDeleteRegistration
# to the jdbc url jdbc:postgresql://localhost/trash and set
# pnfsDeleteRegistrationDbUser and pnfsDeleteRegistrationDbPass to
# the postgres user name and password values.
# Set the value to "" to disable the deletion registration.
# The default value of pnfsDeleteRegistration is "pnfs:"
#
#
#pnfsDeleteRegistration=jdbc:postgresql://localhost/trash
#pnfsDeleteRegistration=pnfs:
#pnfsDeleteRegistrationDbUser=srmdcache
#pnfsDeleteRegistrationDbPass=



# -----------------------------------------------------------------------
# Directory Pools
# -----------------------------------------------------------------------
#
#directoryPoolPnfsBase=/pnfs/fs



# ------------------------------------------------------------------------
# Statistics module
# ------------------------------------------------------------------------


# ---- Directory for storing statistics.
#
# This is the directory under which the statistics module will
# store historic data.
#
statisticsLocation=${dcache.paths.statistics}



# -----------------------------------------------------------------------
# Tape protection
# -----------------------------------------------------------------------
#
# The tape protection feature is only available if the
# stageConfigurationFilePath property is defined and points to an
# existing file containing a list of FQANs and DNs whose owners are
# allowed to stage files (i.e., to read files from dCache that are
# stored only on tape).
#
# Stage configuration can be provided either on the door or on the
# PoolManager, as described in the two cases below:
#
# 1) stage configuration provided on the door
# (remember to repeat the same configuration on each door):
# stagePolicyEnforcementPoint=doors
# 2) stage configuration provided on the PoolManager:
# stagePolicyEnforcementPoint=PoolManager
#
stageConfigurationFilePath=
stagePolicyEnforcementPoint=doors
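
# For example (a sketch; the exact file format is an assumption, one
# quoted DN or FQAN entry per line):
#
#   stageConfigurationFilePath=${dcache.paths.etc}/StageConfiguration.conf
#
# where the file might contain entries such as
#
#   "/DC=org/DC=example/CN=Some User"
#   "/example.org/Role=production"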



# -----------------------------------------------------------------------
# Provide information about message broker
# -----------------------------------------------------------------------
#
# The following properties provide information about the broker
# domain. The actual domain is defined by broker.domain.
#
(immutable)broker.net.ports.tcp-when-scheme-is-cells=${broker.messaging.port}
(immutable)broker.net.ports.udp-when-scheme-is-cells=${broker.port} ${broker.client.port}
(immutable)non-broker.net.ports.tcp-when-scheme-is-cells=
(immutable)non-broker.net.ports.udp-when-scheme-is-cells=${broker.client.port}


(immutable)broker.net.ports.tcp-when-scheme-is-amq=
(immutable)broker.net.ports.udp-when-scheme-is-amq=
(immutable)non-broker.net.ports.tcp-when-scheme-is-amq=
(immutable)non-broker.net.ports.udp-when-scheme-is-amq=


(immutable)broker.net.ports.tcp-when-scheme-is-amq-embedded=${broker.amq.port} ${broker.amq.ssl.port}
(immutable)broker.net.ports.udp-when-scheme-is-amq-embedded=
(immutable)non-broker.net.ports.tcp-when-scheme-is-amq-embedded=
(immutable)non-broker.net.ports.udp-when-scheme-is-amq-embedded=


(immutable)broker.net.ports.tcp-when-scheme-is-cells+amq-embedded=${broker.amq.port} ${broker.amq.ssl.port} ${broker.messaging.port}
(immutable)broker.net.ports.udp-when-scheme-is-cells+amq-embedded=${broker.port} ${broker.client.port}
(immutable)non-broker.net.ports.tcp-when-scheme-is-cells+amq-embedded=
(immutable)non-broker.net.ports.udp-when-scheme-is-cells+amq-embedded=${broker.client.port}


(immutable)broker.net.ports.tcp-when-scheme-is-openmq=${broker.openmq.port}
(immutable)broker.net.ports.udp-when-scheme-is-openmq=
(immutable)non-broker.net.ports.tcp-when-scheme-is-openmq=
(immutable)non-broker.net.ports.udp-when-scheme-is-openmq=


(immutable)broker.net.ports.tcp-when-scheme-is-cells+openmq=${broker.openmq.port} ${broker.messaging.port}
(immutable)broker.net.ports.udp-when-scheme-is-cells+openmq=${broker.port} ${broker.client.port}
(immutable)non-broker.net.ports.tcp-when-scheme-is-cells+openmq=
(immutable)non-broker.net.ports.udp-when-scheme-is-cells+openmq=${broker.client.port}


(immutable)broker.net.ports.tcp-when-scheme-is-none=
(immutable)broker.net.ports.udp-when-scheme-is-none=
(immutable)non-broker.net.ports.tcp-when-scheme-is-none=
(immutable)non-broker.net.ports.udp-when-scheme-is-none=


(immutable)broker.net.ports.tcp=${broker.net.ports.tcp-when-scheme-is-${broker.scheme}}
(immutable)broker.net.ports.udp=${broker.net.ports.udp-when-scheme-is-${broker.scheme}}
(immutable)non-broker.net.ports.tcp=${non-broker.net.ports.tcp-when-scheme-is-${broker.scheme}}
(immutable)non-broker.net.ports.udp=${non-broker.net.ports.udp-when-scheme-is-${broker.scheme}}



#
# The following properties are Obsolete or Forbidden.
#
(obsolete)useFilesystem=
(obsolete)srmVacuum=Use PostgreSQL auto vacuuming instead
(obsolete)srmVacuumPeriod=Use PostgreSQL auto vacuuming instead
(forbidden)java=Define JAVA_HOME in /etc/dcache.env or /etc/default/dcache
(forbidden)java_options=See dcache.java.options or dcache.java.options.extra
(obsolete)bufferSize=Tune TCP setting in the OS instead
(obsolete)tcpBufferSize=Tune TCP setting in the OS instead
(obsolete)maintenanceLibPath=Maintenance module is no longer supported
(obsolete)maintananceLibAutogeneraetPaths=Maintenance module is no longer supported
(obsolete)maintenanceLogoutTime=Maintenance module is no longer supported
(obsolete)srmVersion=
(obsolete)permissionHandler=
(obsolete)PermissionHandlerDataSource=
(forbidden)user=use dcache.user instead
(forbidden)pidDir=use dcache.pid.dir instead
(forbidden)logArea=use dcache.log.dir instead
(forbidden)logMode=use dcache.log.mode instead
(forbidden)classpath=use dcache.java.classpath instead
(forbidden)librarypath=use dcache.java.library.path instead
(forbidden)kerberosRealm=use kerberos.realm instead
(forbidden)kerberosKdcList=use kerberos.key-distribution-center-list instead
(forbidden)authLoginConfig=use kerberos.jaas.config instead
(forbidden)messageBroker=use broker.scheme instead
(forbidden)serviceLocatorHost=use broker.host instead
(forbidden)serviceLocatorPort=use broker.port instead
(forbidden)amqHost=use broker.amq.host instead
(forbidden)amqPort=use broker.amq.port instead
(forbidden)amqSSLPort=use broker.amq.ssl.port instead
(forbidden)amqUrl=use broker.amq.url instead
(forbidden)ourHomeDir=use dcache.home instead
(forbidden)portBase=set protocol-specific default ports directly



Paths are also configuration parameters that are not cell specific:

PATHS


# Various paths used by dCache shell scripts and configuration
# defaults.
#
# These parameters may change in future versions. Avoid redefining
# them.
#
# Note that, by default, dcache.home is auto-detected. This will work
# for standard deployments, including the RPM file, so you do not need
# to specify dcache.home.
#
# You can override the auto-detected value by specifying the DCACHE_HOME
# variable in either the /etc/default/dcache or /etc/dcache.env file.
# Additionally, the value may be specified with the -d option to the
# dcache command.


dcache.paths.share=${dcache.home}
dcache.paths.share.lib=${dcache.paths.share}/lib
dcache.paths.config=/var/lib/dcache/config
dcache.paths.etc=/etc/dcache
dcache.paths.bin=/usr/bin
dcache.paths.lock.file=/var/run/subsys/cache
dcache.paths.pnfs=/pnfs
dcache.paths.classes=${dcache.home}/classes
dcache.paths.lib=/usr/lib/dcache
dcache.paths.billing=/var/lib/dcache/billing
dcache.paths.statistics=/var/lib/dcache/statistics
dcache.paths.plugins=/usr/share/dcache/plugins:/usr/local/share/dcache/plugins
dcache.paths.setup=/etc/dcache/dcache.conf
dcache.paths.classpath=${dcache.java.classpath}:${dcache.paths.classes}/*


#
# Obsolete or Forbidden properties
#
(forbidden)config=use dcache.paths.config instead
(forbidden)classesDir=use dcache.paths.classes instead



Cell specific configuration parameters and their explanation

ACL


# -------------------------------------------------------------------
# ACL Configuration
# -------------------------------------------------------------------
#
# This Java properties file contains default values for ACL
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.
#
# -------------------------------------------------------------------
#
# ACLs in dCache follow the NFS4 specification. When enforcing file
# permissions, dCache will first consult the ACLs. If a request can
# neither be granted nor denied based on the ACLs, then dCache falls
# back to file mode based permission checking.
#
# ACLs are stored in a database. By default, the table 't_acl' is
# used in database 'chimera' on 'localhost'. If Chimera is deployed
# with the 'chimera' database on the same machine as ChimeraDomain
# then the table will already exist. If PNFS is deployed then a
# suitable table must be created and the acl variables adjusted
# accordingly.
#
# The database containing the ACL table only needs to be accessible
# from the machines running the pnfsmanager and acl services, and
# ACLs only have to be enabled and configured for those services.
#
# To enable ACLs, set aclEnabled to true. If the database containing
# the ACL table is on a different host or in a different database
# than 'chimera', then configure aclConnUrl to point to the correct
# database.


acl/cell.name=acladmin


# ---- Enable ACL support
#
# Set to true to enable ACL support.
#
aclEnabled=false


# ---- ACL database parameters
#
# These parameters define the database connection parameters for ACL.
#
(obsolete)aclTable=
(obsolete)aclConnDriver=use chimera.db.driver instead
(obsolete)aclConnUrl=use chimera.db.url instead
(obsolete)aclConnUser=use chimera.db.user instead
(obsolete)aclConnPswd=use chimera.db.password instead


#
# Database related settings reserved for internal use.
#
(obsolete)acl/db.host=use chimera.db.host instead
(obsolete)acl/db.name=use chimera.db.name instead
(obsolete)acl/db.user=use chimera.db.user instead
(obsolete)acl/db.password=use chimera.db.password instead
(obsolete)acl/db.driver=use chimera.db.driver instead
(obsolete)acl/db.url=use chimera.db.url instead
(obsolete)acl/db.schema.auto=



ADMIN


# -----------------------------------------------------------------------
# Default values for admin doors
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for admin
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- LoginManager name
#
# The name of the LoginManager that listens for incoming SSH
# connections
#
admin/cell.name=alm
admin/cell.nameSsh2=ssh2Admin


# ---- Which ssh version shall be used for the admin interface
#
# Possible values are ssh1 or ssh2 or both
sshVersion=both
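
# For example, to run only the ssh2 interface in the domain hosting
# the admin service, the layout file could contain (a sketch; the
# domain name is hypothetical):
#
#   [adminDoorDomain]
#   [adminDoorDomain/admin]
#   sshVersion=ssh2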


# ---- TCP port
#
adminPort=22223
admin/port=${adminPort}
# ssh2 Admin port
admin.ssh2AdminPort=22224


# ---- Admin door history file
#
# The admin door can store a command history in a file. This makes
# the history persistent over multiple logins. To enable this
# feature, set adminHistoryFile to the path of the file that should
# be used to store the history. The recommended path is
# /var/opt/d-cache/adminshell_history. Notice that missing
# directories are not created automatically.
#
adminHistoryFile=


# ---- Location of the ssh keys
#
# Do not change unless you know what you are doing.
#
dcache.paths.ssh-keys=/etc/dcache/admin


# ---- Whether to use ANSI colors or not
#
# When set to true ANSI codes will be used to add colors to the
# admin shell.
#
admin.colors.enable=true


knownUsersFile=${dcache.paths.ssh-keys}/authorized_keys
serverKeyFile=${dcache.paths.ssh-keys}/server_key
hostKeyFile=${dcache.paths.ssh-keys}/host_key
AccessControlCell=acm
userPasswordFile=cell:${AccessControlCell}


# ---- These values are needed and can be set when using the ssh2 admin interface
#
# ---- Authorized_keys list location
#
# Defines the location of the authorized public keys
#
admin.authorizedKey2=${dcache.paths.ssh-keys}/authorized_keys2


# ---- Hostkey location
#
# Defines the location of the ssh2 server host keys
#
admin.dsaHostKeyPrivate=${dcache.paths.ssh-keys}/ssh_host_dsa_key
admin.dsaHostKeyPublic=${dcache.paths.ssh-keys}/ssh_host_dsa_key.pub


#
# Document which TCP ports are opened
#
(immutable)admin/listening-ports-when-sshVersion-is-ssh1=${port}
(immutable)admin/listening-ports-when-sshVersion-is-ssh2=${admin.ssh2AdminPort}
(immutable)admin/listening-ports-when-sshVersion-is-both=${port} ${admin.ssh2AdminPort}
(immutable)admin/net.ports.tcp=${listening-ports-when-sshVersion-is-${sshVersion}}


#
# Obsolete and Forbidden properties
#
(forbidden)keyBase=adjust dcache.paths.ssh-keys property instead



BILLING


# -----------------------------------------------------------------------
# Default values for billing
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for billing
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Cell name of billing service
#
billing/cell.name=billing


# ---- Directory for billing logs
#
# The directory within which the billing logs are to be written.
#
billingLogsDir=${dcache.paths.billing}


# ---- Disable billing to plain text file
#
# Controls whether dCache activity is logged to plain text files. Valid
# values are 'true' and 'false'. If the property is set to 'false' then
# activity is logged to text files. If set to 'true' then there is no
# logging to text files.
#
billingDisableTxt=false


# -----------------------------------------------------------------------
# Format of billing entries in plain text billing files
# -----------------------------------------------------------------------


# The billing cell receives billing messages from various components
# in dCache. Each message is logged to a plain text file using a
# configurable format.
#
# In its simplest form, the format string contains placeholders using
# the syntax $attribute$, where attribute is the name of an attribute
# in the message. The attribute names of each message are listed
# below. Messages inherit the attributes of the messages they extend.
#
# Each attribute has a type. Some types may expose additional fields.
# The syntax for accessing a field is $attribute.field$, where field
# is a field of the attribute. The field may itself have additional
# fields. The available fields of a number of types are listed below.
#
# Some types may have alternate renderings using a format string. The
# syntax for specifying a format string is $attribute; format="..."$,
# where ... is a type specific format string. Similarly, for
# collection types (e.g. arrays), the separator symbol can be specified
# as $attribute; separator="..."$, where ... is the separator.
#
# For advanced customizing, consult the StringTemplate v3
# documentation at
#
# http://www.antlr.org/wiki/display/ST/StringTemplate+3+Documentation
#
# or the cheat sheet at
#
# http://www.antlr.org/wiki/display/ST/StringTemplate+cheat+sheet
#
#
# Message: InfoMessage
# --------------------
#
# Attribute Type Description
# --------- ---- -----------
#
# date Date Time stamp of message
# cellName String Name of cell submitting the message
# cellType String Type of cell submitting the message
# type String Request type
# rc Integer Result code
# message String Message (usually error message)
# queuingTime Long Time request was queued (milliseconds)
# subject Subject Identity of user given as a collection of
# principals (uid, gid, FQAN, DN, Username,
# Kerberos, Client-IP)
#
# Message: PnfsFileInfoMessage extends InfoMessage
# ------------------------------------------------
#
# Attribute Type Description
# --------- ---- -----------
#
# pnfsid PnfsId PNFS id of file
# path String File path
# filesize Long File size (bytes)
# storage StorageInfo Storage info of file
#
# Message: MoverInfoMessage extends PnfsFileInfoMessage
# -----------------------------------------------------
#
# Attribute Type Description
# --------- ---- -----------
#
# transferred Long Bytes transferred
# connectionTime Long Time client was connected (milliseconds)
# created Boolean True on upload, false on download
# protocol ProtocolInfo Protocol related information
# initiator String Name of cell that initiated the transfer
#
# Message: DoorRequestInfoMessage extends PnfsFileInfoMessage
# -----------------------------------------------------------
#
# Attribute Type Description
# --------- ---- -----------
# transactionTime Long Duration of operation (milliseconds)
# uid Integer UID of user
# gid Integer GID of user
# owner String DN or user name
# client String Client IP address
#
#
# Message: StorageInfoMessage extends PnfsFileInfoMessage
# -----------------------------------------------------------
#
# Attribute Type Description
# --------- ---- -----------
#
# transferTime Long Duration of operation (milliseconds)
#
#
# Message: RemoveFileInfoMessage extends InfoMessage
# --------------------------------------------------
#
# No additional attributes.
#
#
# Type: Date
# ----------
#
# By specifying $date; format="yyyy.MM.dd HH:mm:ss:SSS"$ the date
# and time will be formatted respecting the given pattern
# "yyyy.MM.dd HH:mm:ss:SSS". Any other date pattern can be chosen
# according to the Java API SimpleDateFormat class. The default
# pattern for the parameter $date$ is "MM.dd HH:mm:ss".
#
#
# Type: ProtocolInfo
# ------------------
#
# Field Type Description
# ----- ---- -----------
#
# protocol String Protocol name (as used in pool manager)
# minorVersion Integer Minor version of protocol
# majorVersion Integer Major version of protocol
# socketAddresss InetSocketAddress IP address and port of client
#
# Type: StorageInfo
# -----------------
#
# Field Type Description
# ----- ---- -----------
#
# storageClass String The storage class of the file
# hsm String HSM instance
# locations URI[] Tape locations
# fileSize Long File size in bytes
# stored Boolean True when stored on tape, false otherwise
# retentionPolicy RetentionPolicy Retention policy of file
# accessLatency AccessLatency Access latency of file
# map Map<String,String> Additional info as key-value pairs
#
# Type: Subject
# -------------
#
# Field Type Description
# ----- ---- -----------
#
# dn String Distinguished name
# uid Integer User id
# primaryGid Integer Primary group id
# gids Integer[] Group ids
# primaryFqan String First FQAN (Fully Qualified Attribute Names
# used by VOMS)
# fqans String[] FQANs (unsorted)
# userName String Mapped user name
# loginName String Login name
#
# Type: PnfsId
# ------------
#
# Field Type Description
# ----- ---- -----------
# databaseId Integer Database ID (first two bytes of PNFS ID)
# domain String
# id String String form of PNFS ID
# bytes byte[] Binary form of PNFS ID
#



# ---- MoverInfoMessage
#
# Submitted by pools for each file transfer.
#
billing.format.MoverInfoMessage=$date$ [$cellType$:$cellName$:$type$] [$pnfsid$,$filesize$] [$path$] $if(storage)$$storage.storageClass$@$storage.hsm$$else$<Unknown>$endif$ $transferred$ $connectionTime$ $created$ {$protocol$} [$initiator$] {$rc$:"$message$"}


# ---- RemoveFileInfoMessage
#
# Submitted by PnfsManager on file removal.
#
billing.format.RemoveFileInfoMessage=$date$ [$cellType$:$cellName$:$type$] [$pnfsid$,$filesize$] [$path$] $if(storage)$$storage.storageClass$@$storage.hsm$$else$<Unknown>$endif$ {$rc$:"$message$"}


# ---- DoorRequestInfoMessage
#
# Submitted by doors for each file transfer.
#
billing.format.DoorRequestInfoMessage=$date$ [$cellType$:$cellName$:$type$] ["$owner$":$uid$:$gid$:$client$] [$pnfsid$,$filesize$] [$path$] $if(storage)$$storage.storageClass$@$storage.hsm$$else$<Unknown>$endif$ $transactionTime$ $queuingTime$ {$rc$:"$message$"}


# ---- StorageInfoMessage
#
# Submitted by pools for each flush to and fetch from tape.
#
billing.format.StorageInfoMessage=$date$ [$cellType$:$cellName$:$type$] [$pnfsid$,$filesize$] [$path$] $if(storage)$$storage.storageClass$@$storage.hsm$$else$<Unknown>$endif$ $transferTime$ $queuingTime$ {$rc$:"$message$"}


# -----------------------------------------------------------------------
# Store billing data in database
# -----------------------------------------------------------------------


# This property describes whether the billing information should be
# written to a PostgreSQL database. Valid values are 'no' and 'yes'.
#
# When this property is set to 'yes' then billing will write dCache
# billing information into a database. The database must be created
# manually but dCache will manage the creation and evolution of
# tables within this database.
#
# As an example, the following two commands instruct PostgreSQL to
# create the database 'billing' and allow user 'srmdcache' to access
# it:
#
# createdb -O srmdcache -U postgres billing
# createlang -U srmdcache plpgsql billing
#
billingToDb=no


# ---- DAO access layer used for persistence
#
billingInfoAccess=org.dcache.services.billing.db.impl.datanucleus.DataNucleusBillingInfo


# ---- If this is set, it overrides the jar-resident configuration resource
# for the DAO implementation, if any
billingInfoAccessPropertiesFile=


# ---- Commit optimizations: in-memory caching thresholds
#
billingMaxInsertsBeforeCommit=10000
billingMaxTimeBeforeCommitInSecs=5


# ---- liquibase changelog
billingChangelog=org/dcache/services/billing/db/sql/billing.changelog-master.xml


# ---- liquibase update
updateBillingDb=true


# ---- pool manager
poolManager=PoolManager


# ---- communication timeout
poolConnectTimeout=3600000


# ---- RDBMS/JDBC Driver
#
billingDbDriver=org.postgresql.Driver


# ---- RDBMS/JDBC URL
#
billingDbUrl=jdbc:postgresql://${billingDbHost}/${billingDbName}


# ---- RDBMS/JDBC Database host name
#
billingDbHost=localhost


# ---- RDBMS/JDBC Database user name
#
billingDbUser=srmdcache


# ---- RDBMS/JDBC Database user password
#
billingDbPass=srmdcache


# The following enables using pgfile, which is disabled by default
# billingDbPgPassFileName=/root/.pgpass
#
billingDbPgPassFileName=


# ---- Database name
#
billingDbName=billing


# ---- If this is set, it overrides the jar-resident configuration resource
# for ITimeFramePlot, if any
billingPlotPropertiesFile=


#
# Database related settings reserved for internal use.
#
billing/db.name=${billingDbName}
billing/db.user=${billingDbUser}
billing/db.host=${billingDbHost}
billing/db.password=${billingDbPass}
billing/db.driver=${billingDbDriver}
billing/db.url=${billingDbUrl}
billing/db.schema.auto=false


CHIMERA


# -----------------------------------------------------------------------
# Default values for Chimera namespace DB configuration
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for Chimera DB
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Chimera database name
#
chimera.db.name = chimera


# ---- Chimera database host name
#
chimera.db.host = localhost


# ---- URL of db connection
#
chimera.db.url = jdbc:postgresql://${chimera.db.host}/${chimera.db.name}?prepareThreshold=3


# ---- Database user name
#
chimera.db.user = chimera


# ---- Database user password
#
chimera.db.password =


# ---- Database dialect (vendor)
#
# Known dialects:
# PgSQL : for PostgreSQL >= 8.1
# HsqlDB : for Hsql DB >= 2.0.0
#
chimera.db.dialect = PgSQL


# ---- JDBC driver class name.
#
# Database specific. Please consult your DB documentation for details.
#
chimera.db.driver = org.postgresql.Driver
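
# As a sketch (host and password are illustrative), a site with the
# Chimera database on a dedicated server could set in etc/dcache.conf:
#
#   chimera.db.host=db.example.org
#   chimera.db.password=secret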





CLEANER


# -----------------------------------------------------------------------
# Default values for Cleaner
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for the cleaner
# service. All values can be redefined in etc/dcache.conf. Do not
# modify any values here as your changes will be lost when you next
# upgrade.
#
# The cleaner is the component that watches for files being deleted
# in the namespace. There must be at most one cleaner per dCache
# instance. When files are deleted, the cleaner will notify the
# pools that hold a copy of the deleted files' data and tell the
# pools to remove that data. Optionally, the cleaner can instruct
# HSM-attached pools to remove copies of the file's data stored on
# tape.
#
# The cleaner runs periodically, so there may be a delay between a
# file being deleted in the namespace and the corresponding deletion
# of that file's data in pools and on tape.
#
# The cleaner maintains a list of pools that it was unable to
# contact: pools are either offline or sufficiently overloaded that
# they couldn't respond in time. The cleaner will periodically try
# to delete data on pools in this list but, between such retries,
# these pools are excluded from cleaner activity.
#
# There are actually two cleaners: one for PNFS and one for Chimera.
# The correct cleaner is selected automatically. Unless otherwise
# stated the properties here apply to both cleaners.
#
# The PNFS cleaner must run on the same machine as PNFS. The Chimera
# cleaner doesn't suffer from this limitation.


cleaner/cell.name=cleaner


# ---- Cleaner thread count (Chimera only)
#
# The number of threads in the cleaner's thread-pool. This
# thread-pool is used for all cleaner activity.
#
cleaner.thread-pool.size = 5


# ---- Period between successive runs
#
# The time, in seconds, between successive cleaner runs.
#
# [The default value for PNFS Cleaner is 90]
#
cleaner.period = 120


# ---- Pool communication time-out
#
# The time, in seconds, that the cleaner will wait after sending a
# message to a pool for that pool's reply. If no reply is received
# in time then the cleaner places this pool in the offline pools
# list.
#
cleaner.pool-reply-timeout = 100


# ---- Unavailable pool retry time
#
# The time between successive attempts to clean files from a pool
# should the pool fail to respond to cleaner requests.
#
# For the PNFS cleaner the value is in minutes; for Chimera cleaner
# the value is in seconds.
#
# [For PNFS, the default value is 3600]
#
cleaner.pool-retry = 1800


# ---- Report to cell
#
# The cleaner will send a message indicating that files' data have
# been removed from pools. If the cleaner.report-cell
# property is 'none' then no messages are sent; otherwise this
# property contains the name of the cell these messages should be
# sent to.
#
cleaner.report-cell=broadcast


# ---- Maximum files in one message
#
# For each pool, the cleaner produces a list of all deleted files
# that have data stored on that pool. When instructing a pool to
# remove data, the cleaner includes as many files as possible in the
# message.
#
# The cleaner.max-files-in-message property places an upper
# limit on the number of files' data to be deleted in a message. If
# more than this number of files are to be deleted then the pool will
# receive multiple messages.
#
cleaner.max-files-in-message = 500


# ---- Trash Location Directory (PNFS only)
#
# Location of the PNFS trash directory. This is the PNFS directory
# in which information about deleted files is stored. If this
# property is empty, the cleaner will read the value from the
# /usr/etc/pnfsSetup file.
#
cleaner.trash.dir=


# ---- Book keeping directory (PNFS only)
#
# The directory used by the cleaner for book-keeping purposes. The
# directory must exist; the cleaner will not create it. The cleaner
# will create two directories beneath this directory: 'current' (for
# storing current activity) and 'archive' (for storing logs).
#
cleaner.book-keeping.dir = /opt/pnfsdb/pnfs/trash/2


# ---- Archive (PNFS only)
#
# Controls how the cleaner logs activity. Valid values for this
# property are 'log', 'zip' and 'none'. The log files are stored in
# the ${cleaner.book-keeping.dir}/archive directory.
#
# 'log' saves uncompressed information in files with names like
# 'YYYY.MM.DD' (year, month and day).
#
# 'zip' saves gzip-compressed information in files with names like
# '$removes.nnnnn.gz' where 'nnnnn' is the current Unix-time
# in milliseconds.
#
# 'none' doesn't save any information.
#
cleaner.archive = none


# ---- HSM cleaner enabled
#
# If 'enabled' then the cleaner will instruct an HSM-attached pool to
# remove a deleted file's data stored in the HSM.
#
# To enable this feature, the property must be enabled at all the
# pools that are supposed to delete files from an HSM.
#
cleaner.hsm = disabled
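
# For example, to activate HSM cleaning, set in etc/dcache.conf (or in
# the layout file for the domains hosting the relevant pools):
#
#   cleaner.hsm = enabled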


# ---- Interval between flushing failures to the repository (PNFS only)
#
# Specifies the time, in seconds, between the cleaner flushing failed
# files to the repository. Between successive flushes, information
# about failed flushes is kept in memory.
#
# Each flush will create a new file. A lower value will cause the
# repository to be split into more files. A higher value will cause
# a higher memory usage and a larger number of files in the trash
# directory.
#
cleaner.hsm.flush.period = 60


# ---- HSM cleaner maximum requests
#
# As with messages sent to pools to remove deleted files' data stored
# on the pool, the cleaner will group together multiple deleted files
# that had data stored on the HSM and that may be deleted by the same
# pool. The cleaner.hsm.max-files-in-message property places
# an upper limit on the number of files a message may contain.
#
cleaner.hsm.max-files-in-message = 100


# ---- HSM cleaner maximum concurrent requests (PNFS only)
#
# When the trash directory is scanned, information about deleted
# files is queued in memory. This variable specifies the maximum
# length of this queue. When the queue length is reached, scanning
# is suspended until files have been cleaned or flushed to the
# repository.
#
cleaner.hsm.max-concurrent-requests = 10000


# ---- HSM-deleting message communication timeout
#
# Files are cleaned from an HSM by the cleaner sending a message to
# an HSM-attached pool. The pool replies with a confirmation
# message. This property specifies the timeout the cleaner adopts
# while waiting for the reply, in seconds, after which the operation
# is considered to have failed.
#
# [In PNFS, the default is 100]
#
cleaner.hsm.pool-reply-timeout = 120


# ---- Location of trash directory for files on tape (PNFS only)
#
# The HSM cleaner periodically scans this directory to detect deleted
# files.
#
cleaner.hsm.trash.dir = /opt/pnfsdb/pnfs/1


# ---- Failed HSM removal storage (PNFS only)
#
# The HSM cleaner uses this directory to store information about
# files it could not clean right away. The cleaner will reattempt to
# clean these files later. If the directory doesn't exist then the
# cleaner will attempt to create it.
#
cleaner.hsm.repository.dir = /opt/pnfsdb/pnfs/1/repository




#
# Database related settings. Reserved for internal use.
#
cleaner/db.host=${chimera.db.host}
cleaner/db.name=${chimera.db.name}
cleaner/db.user=${chimera.db.user}
cleaner/db.password=${chimera.db.password}
cleaner/db.driver=${chimera.db.driver}
cleaner/db.url=${chimera.db.url}
cleaner/db.schema.auto=false



#
# Obsolete or Forbidden properties
#
(forbidden)cleanerArchive=adjust cleaner.archive property instead
(forbidden)cleanerDB=adjust cleaner.book-keeping.dir property instead
(forbidden)cleanerPoolTimeout=adjust cleaner.pool-reply-timeout property instead
(forbidden)cleanerProcessFilesPerRun=adjust cleaner.max-files-in-message property instead
(forbidden)cleanerRecover=adjust cleaner.pool-retry property instead
(forbidden)cleanerRefresh=adjust cleaner.period property instead
(forbidden)hsmCleaner=adjust cleaner.hsm property instead
(forbidden)hsmCleanerFlush = adjust cleaner.hsm.flush.period property instead
(forbidden)hsmCleanerRecover= adjust cleaner.pool-retry property instead
(forbidden)hsmCleanerRepository= adjust cleaner.hsm.repository.dir property instead
(forbidden)hsmCleanerRequest=adjust cleaner.hsm.max-files-in-message property instead
(forbidden)hsmCleanerScan=adjust cleaner.period property instead
(forbidden)hsmCleanerTimeout=adjust cleaner.hsm.pool-reply-timeout property instead
(forbidden)hsmCleanerTrash=adjust cleaner.hsm.trash.dir property instead
(forbidden)hsmCleanerQueue=adjust cleaner.hsm.max-concurrent-requests property instead
(forbidden)trash=adjust cleaner.trash.dir property instead


DCAP


# -----------------------------------------------------------------------
# Default values for DCAP doors
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for DCAP
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ----- Cell names for DCAP doors
#
authdcap/cell.name=DCap-${host.name}
dcap/cell.name=DCap-${host.name}
gsidcap/cell.name=DCap-gsi-${host.name}
kerberosdcap/cell.name=DCap-Kerberos-${host.name}


# ---- TCP port numbers for DCAP doors
#
dCapPort=22125
dcap/port=${dCapPort}
authdcap/port=${dCapPort}
dCapGsiPort=22128
gsidcap/port=${dCapGsiPort}
dCapKerberosPort=22725
kerberosdcap/port=${dCapKerberosPort}


dcapIoQueue=
dcapIoQueueOverwrite=denied
dcapMaxLogin=1500
dcapPasswdFile=/opt/d-cache/etc/passwd4dCapDoor
gsidcapIoQueue=
gsidcapIoQueueOverwrite=denied
gsidcapMaxLogin=1500
kerberosdcapIoQueue=
kerberosdcapIoQueueOverwrite=denied
kerberosdcapMaxLogin=1500
kerberosdcap/kerberos.service-principle-name=host/${host.fqdn}@${kerberos.realm}


# ---- Allow overwrite of existing files via GSIdCap
#
# allow=true, disallow=false
#
truncate=false


#
# Document which TCP ports are opened
#
(immutable)dcap/net.ports.tcp=${port}
(immutable)authdcap/net.ports.tcp=${port}
(immutable)gsidcap/net.ports.tcp=${port}
(immutable)kerberosdcap/net.ports.tcp=${port}




FTP


# -----------------------------------------------------------------------
# Default values for FTP doors
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for FTP
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ----- Cell names for FTP doors
#
gridftp/cell.name=GFTP-${host.name}
ftp/cell.name=FTP-${host.name}
kerberosftp/cell.name=KFTP-${host.name}


# ---- TCP port numbers for FTP doors
#
ftpPort=22126
ftp/port=${ftpPort}
kerberosFtpPort=22127
kerberosftp/port=${kerberosFtpPort}
gsiFtpPortNumber=2811
gridftp/port=${gsiFtpPortNumber}


# ---- GridFTP port range
#
# Do not change unless you know what you are doing.
#
clientDataPortRange=${net.wan.port.min}:${net.wan.port.max}


# Service Principal Names for the Kerberos doors. You shouldn't need
# to alter these.
#
kerberosftp/kerberos.service-principle-name=ftp/${host.fqdn}@${kerberos.realm}


# ---- Period between successive GridFTP performance markers
#
# This variable controls how often performance markers are written.
# The value is in seconds: set performanceMarkerPeriod to 180 to
# get performanceMarkers every 3 minutes. A value of 0 will
# disable performance markers.
#
performanceMarkerPeriod=70


# ---- PoolManager to use for FTP doors
#
# When empty a PoolManager is determined automatically based on
# other configuration parameters.
gsiftpPoolManager=


# ---- Pool proxy to use for FTP doors
(obsolete)gsiftpPoolProxy=


# ---- PoolManager timeout
#
# Specifies the timeout in seconds for communication with the
# PoolManager cell.
#
gsiftpPoolManagerTimeout=5400


# ---- Pool timeout
#
# Specifies the timeout in seconds for communication with the
# pool cells.
#
gsiftpPoolTimeout=600


# ---- PnfsManager timeout
#
# Specifies the timeout in seconds for communication with the
# PnfsManager cell.
#
gsiftpPnfsTimeout=300


# ---- How many times to retry pool selection
#
# If pool selection fails for some reason, the door may retry the
# operation. This setting specifies how many times to retry before
# the transfer fails.
#
gsiftpMaxRetries=80


# ---- Maximum number of concurrent streams to allow
#
# If a client creates more concurrent streams than allowed, the
# transfer will fail.
#
gsiftpMaxStreamsPerClient=10


# ---- Whether to delete files after upload failures
#
# When set to true, FTP doors delete files after upload failures.
#
gsiftpDeleteOnConnectionClosed=true


# ---- Limit on number of concurrent logins
#
# Specifies the largest number of simultaneous logins to allow to an
# FTP door.
#
gsiftpMaxLogin=100


# ---- Mover queue
#
# The mover queue on the pool to which FTP transfers will be
# scheduled. If blank, the default queue will be used.
#
gsiftpIoQueue=


# ---- What IP address to use for connections from pools to the FTP door
#
# FTP doors in some cases act as proxies for the transfer. This
# property specifies the IP of the interface on the door that the
# pool will connect to. If empty, the door will choose a local
# address. This address must not be a wildcard address.
#
gsiftpAdapterInternalInterface=


# ----- FTP transaction log directory
#
# When set, a log file per FTP session is created in this directory.
#
FtpTLogDir=


# ---- Whether passive FTP transfers are relayed by the door
#
# Passive FTP transfers are those where the client creates the data
# channel connection to the server.
#
# If this option is set to true, then all passive transfers are
# relayed by the FTP door. If this option is set to false, then the
# client is instructed to connect directly to the pool. This
# requires that the pool allows inbound connections. Even when set
# to false, there are several circumstances in which the connection
# cannot be established directly to the pool due to limitations in
# the FTP protocol. In such cases the connection will be relayed by
# the door.
#
# This setting is interpreted by both FTP doors and pools. For a
# given combination of door and pool, a direct connection to the
# pool can only be established if this setting is false at both the
# door and the pool.
#
ftp.proxy.on-passive=false
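

# For example, a hypothetical site whose pools sit behind a firewall
# that blocks inbound connections might relay all passive transfers
# through the door by adding the following to etc/dcache.conf:
#
#   ftp.proxy.on-passive=true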


# ---- Whether active FTP transfers are relayed by the door
#
# Active FTP transfers are those where the server creates the data
# channel connection to the client.
#
# If this option is set to true, then all active transfers are
# relayed by the FTP door. If this option is set to false, then the
# pool connects directly to the client. This requires that the pool
# allows outbound connections. If the pool cannot establish the
# connection to the client, then the transfer will fail.
#
ftp.proxy.on-active=false


#
# Document which TCP ports are opened
#
(immutable)ftp/net.ports.tcp=${port} ${clientDataPortRange}
(immutable)kerberosftp/net.ports.tcp=${port} ${clientDataPortRange}
(immutable)gridftp/net.ports.tcp=${port} ${clientDataPortRange}


# ---- Obsolete properties
#
(obsolete)ftpBase=
(obsolete)spaceReservation=
(obsolete)spaceReservationStrict=
(forbidden)gsiftpAllowPassivePool=See ftp.proxy.on-passive for details
(forbidden)kerberosScvPrincipal=set kerberos.service-principle-name for the ftp service


# ---- Number of concurrent streams to use by default
#
# Default number of streams per client in mode E transfers. For
# compliance with GFD.21, this has to be 1. Therefore this property
# is deprecated.
#
(forbidden)gsiftpDefaultStreamsPerClient=value must be 1 for compliance with GFD.21



GPLAZMA


# -----------------------------------------------------------------------
# Default values for gPlazma configuration
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for gPlazma
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.




# -----------------------------------------------------------------------
# Common properties
# -----------------------------------------------------------------------
#
# gPlazma comes in two versions: the one available before 1.9.12
# (gPlazma-1) and the new gPlazma (gPlazma-2). This section contains
# options that apply equally to gPlazma-1 or gPlazma-2.
#


# ---- The gPlazma version to run
#
# Valid values are "1" and "2". Selecting "1" chooses gPlazma-1, the
# implementation of gPlazma available in dCache versions prior to
# 1.9.12. Selecting "2" chooses the new gPlazma.
#
gplazma.version = 1



# ---- Name of the gPlazma cell
#
# The name gPlazma will use when running. This name will be
# registered as well-known to other services. This is important if
# you wish to run multiple gPlazma cells as they will need different
# names.
#
gplazma/cell.name=gPlazma



# ---- Name used by doors
#
# The name of the gPlazma cell a door will contact. This becomes
# important when you have multiple gPlazma instances in a dCache
# system with different doors talking to different gPlazma cells.
#
gplazma=gPlazma



# ---- Number of concurrent requests to process.
#
# The number of login requests that gPlazma will process
# concurrently. Setting this number too high may result in large
# spikes of CPU activity and the potential to run out of memory.
# Setting the number too low results in potentially slow login
# activity.
#
gPlazmaNumberOfSimultaneousRequests=30



# ---- Use gPlazma as a module
#
# This property takes one of two values: false or true. Specifying
# false has no effect. Enabling gPlazma as a module results in doors
# running their own gPlazma. This has the advantage of distributing
# the CPU load but requires additional maintenance. gPlazma as a
# module may be enabled on a per-door basis.
#
useGPlazmaAuthorizationModule=false



# ---- Run a centralised gPlazma service
#
# This property takes one of two values: false or true. Specify
# true if dCache is to run a single centralised gPlazma. Doors may
# still be configured to use the gPlazma module (see the sketch
# below).
#
useGPlazmaAuthorizationCell=true
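

# As a sketch of a hybrid setup (hypothetical domain and door names),
# a single door can run its own gPlazma module while the rest of the
# system uses the centralised cell, by overriding both properties in
# that door's section of the layout file:
#
#   [doorDomain/dcap]
#   useGPlazmaAuthorizationModule=true
#   useGPlazmaAuthorizationCell=false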




# -----------------------------------------------------------------------
# Properties for gPlazma-1
# -----------------------------------------------------------------------
#
# The following properties are for gPlazma-1, the version of gPlazma
# available in versions of dCache prior to 1.9.12
#


# ---- Location of the policy file
#
# The policy file provides overall control of how gPlazma behaves.
# It specifies which plugins are to be used, the plugins' relative
# priorities and also how these plugins behave.
#
gplazmaPolicy=${dcache.paths.etc}/dcachesrm-gplazma.policy




# -----------------------------------------------------------------------
# Properties for gPlazma-2
# -----------------------------------------------------------------------
#
# The following properties are for the version of gPlazma
# available with 1.9.12 and later.
#


# ---- Location of the configuration file
#
# The location of the gPlazma configuration file. This controls
# which plugins are used to authenticate end-users, in which order
# and how the plugins are configured.
#
gplazma.configuration.file=${dcache.paths.etc}/gplazma.conf
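

# A minimal sketch of such a gplazma.conf for X.509/VOMS-based
# authentication (the plugin choice and order shown here are examples
# only and must be adapted to your site):
#
#   auth    optional  x509
#   auth    optional  voms
#   map     requisite vorolemap
#   map     requisite authzdb
#   session requisite authzdb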



# -----------------------------------------------------------------------
# Properties for gPlazma 2 plugins
# -----------------------------------------------------------------------


# ---- Path of the grid-mapfile file
gplazma.gridmap.file=/etc/grid-security/grid-mapfile


# ---- Path of the storage-authzdb file
gplazma.authzdb.file=/etc/grid-security/storage-authzdb


# ---- Mapping order for determining the UID
#
# The storage-authzdb file maps names to UID, one or more GIDs, and a
# number of attributes.
#
# The authzdb plugin is typically used with other plugins that map
# user credentials to user and group names. Typical examples are
# gridmap (maps DN to user name) and vorolemap (maps FQAN to group
# name). The authzdb plugin maps both user names and group names to
# UIDs and GIDs.
#
# The authzdb plugin can be configured to control how it selects the
# mapping that determines the UID to use. The property is an ordered,
# comma-separated list of principal shortcuts that are consulted to
# select among several possible mappings. The available principal
# shortcuts are:
#
# uid Some protocols (specifically DCAP) allow the client to specify
# a UID explicitly. The UID can be used to disambiguate between
# several available mappings. Note that a client provided UID is
# not in itself enough to authorize use of a mapping.
#
# login Some protocols (DCAP, FTP, among others) allow a login name
# to be specified in addition to regular X.509 or Kerberos
# authentication. The login name may be used to disambiguate
# between several available mappings. Note that a client
# provided login name is not in itself enough to authorize use
# of a mapping.
#
# user The authzdb plugin is always combined with other plugins,
# such as the gridmap plugin. Such plugins may map to user
# names, which both authorize the use of a mapping in
# storage-authzdb and may determine the mapping being used.
#
# group The authzdb plugin is always combined with other plugins,
# such as the vorolemap plugin. Such plugins may map to
# group names, which both authorize the use of a mapping in
# storage-authzdb and may determine the mapping being used. In
# this case the primary group name will determine the mapping
# from which the UID is taken.
#
# With the default setting the set of candidate mappings (the
# mappings the user is authorized to use) is determined by the user
# and group names generated by other plugins (e.g. gridmap and
# vorolemap). To select one of the mappings, a user provided UID is
# consulted; if not available a user provided login name is consulted;
# if not available the mapping of a user name generated by another
# plugin is consulted (e.g. gridmap); if not available the mapping of a
# primary group name generated by another plugin is consulted (e.g.
# vorolemap).
#
# A typical reason to change the default is if one wants to give
# priority to the group name mapping rather than the user name
# mapping. E.g. when combined with gridmap and vorolemap, changing this
# property to uid,login,group,user means that the primary group name
# as generated by vorolemap determines the UID and only if that is
# not available will the user name generated by gridmap be used.
#
gplazma.authzdb.uid=uid,login,user,group
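

# As an illustration (hypothetical DN, user name and IDs), a
# grid-mapfile entry and a matching storage-authzdb entry might look
# like:
#
#   "/C=DE/O=Example/CN=Jane Doe" jdoe
#   authorize jdoe read-write 1001 1001 / / /
#
# With the default order above, the user name 'jdoe' produced by
# gridmap selects the storage-authzdb mapping that supplies the UID.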


# ---- Mapping order for determining the primary GID
#
# Similar to gplazma.authzdb.uid, but determines how the primary GID
# is selected. The same principal shortcuts are available, with the
# exception of uid; instead a user provided GID is consulted when the
# gid shortcut is used.
#
# A typical reason to change the default is if one wants to give
# priority to the user name mapping rather than the group name
# mapping. E.g. when combined with gridmap and vorolemap, changing this
# property to gid,login,user,group means that the user name as
# generated by gridmap determines the primary GID and only if that is
# not available will the primary group name generated by vorolemap be
# used.
#
gplazma.authzdb.gid=gid,login,group,user
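

# For example (a sketch in etc/dcache.conf), to let the user name
# mapping determine the primary GID as described above:
#
#   gplazma.authzdb.gid=gid,login,user,group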


# ---- Path to the vomsdir directory
gplazma.vomsdir.dir=/etc/grid-security/vomsdir


# ---- Path to the directory containing trusted CA certificates
gplazma.vomsdir.ca=/etc/grid-security/certificates


# ---- Path to the grid-vorolemap file
gplazma.vorolemap.file=/etc/grid-security/grid-vorolemap


# ---- Password of the host key, if any
gplazma.argus.hostkey.password=


# ---- Path to the PEM encoded host key
gplazma.argus.hostkey=/etc/grid-security/hostkey.pem


# ---- Path to the PEM encoded host certificate
gplazma.argus.hostcert=/etc/grid-security/hostcert.pem


# ---- Path to the directory containing trusted CA certificates
gplazma.argus.ca=/etc/grid-security/certificates


# ---- Argus resource ID
gplazma.argus.resource=dcache


# ---- Argus action ID
gplazma.argus.action=access


# ---- Argus endpoint
gplazma.argus.endpoint=https://localhost:8154/authz


# ---- Path to kpwd file
gplazma.kpwd.file=${kpwdFile}


# ---- NIS server host
gplazma.nis.server=nisserv.domain.com


# ---- NIS domain name
gplazma.nis.domain=domain.com



# -----------------------------------------------------------------------
# Obsolete properties.
# -----------------------------------------------------------------------
#
# The following properties are no longer supported and have no
# effect.
#
(obsolete)gPlazmaRequestTimeout=
(obsolete)delegateToGPlazma=
(forbidden)gPlazmaNumberOfSimutaneousRequests=use gPlazmaNumberOfSimultaneousRequests instead



HTTPD


# -----------------------------------------------------------------------
# Default values for httpd
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for the httpd
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


httpd/cell.name=httpd


httpdPort=2288
httpd/port=${httpdPort}


# ---- Directory locations for dCache web interface
#
# The following variables specify the absolute location of static
# content exposed by the dCache-internal web server.
#
httpd.static-content.dir=${dcache.paths.share}/httpd/static
httpd.static-content.scripts=${httpd.static-content.dir}/scripts
httpd.static-content.docs=${httpd.static-content.dir}/docs
httpd.static-content.images=${httpd.static-content.dir}/images
httpd.static-content.styles=${httpd.static-content.dir}/styles
httpd.static-content.index=${httpd.static-content.dir}/index.html


#
# For billing history plots
#
httpd.static-content.plots=/var/lib/dcache/plots
httpd.static-content.plots.subdir=/plots


#
# Document which TCP ports are opened
#
(immutable)httpd/net.ports.tcp=${port}


# ---- Obsolete properties
#
(obsolete)httpdEnablePoolCollector=PoolCollector is now always enabled
(forbidden)images=use httpd.static-content.images instead
(forbidden)styles=use httpd.static-content.styles instead



INFO-PROVIDER


# -----------------------------------------------------------------------
# info-provider default values
# -----------------------------------------------------------------------
#
# This properties file contains default values for dCache
# info-provider. All values can be redefined in etc/dcache.conf. Do
# not modify any values here as your changes will be lost when you
# next upgrade.
#
# The info-provider generates LDIF-formatted data conforming to the
# GLUE information model's LDAP bindings. It takes information from
# the XML data provided by the info service. The info-provider
# support script is run periodically by BDII to fetch data with
# which it keeps its information up-to-date.
#
# The info-provider requires both the info service and the internal
# dCache web server. It is possible to run BDII on a different
# machine from the dCache web server; however, the BDII machine must
# be able to access the web pages.
#


# ---- Single words or phrases that describe your site
#
# The following section has configuration that directly affects the
# output from the info-provider. Some properties affect only the
# GLUE v1.3 output, others only affect the GLUE 2.0 output and the
# remaining affect both. The affected versions of GLUE are
# indicated in square brackets.
#
# GlueSiteUniqueID [1.3, 2.0]: a unique reference for your site.
# This must match the GlueSiteUniqueID defined in other services.
#
# The default value is not valid, so this property must be
# configured.
#
info-provider.site-unique-id=EXAMPLESITE-ID


# GlueSEUniqueID [1.3, 2.0]: your dCache's Unique ID. Currently,
# this MUST be the FQDN of your SRM end-point.
#
# The default value is not valid, so this property must be
# configured.
#
info-provider.se-unique-id=dcache-srm.example.org


# GlueSEName [1.3, 2.0]: a human understandable name for your SE.
# It may contain spaces. You may leave this empty and a GlueSEName
# will not be published.
#
info-provider.se-name=


# GlueSEStatus [1.3]: current status of dCache. This should be one
# of the following values:
#
# Production the SE processes old and new requests according to
# its policies,
#
# Queuing the SE can accept new requests, but they will be
# kept on hold,
#
# Closed the SE does not accept new requests and does not
# process old requests,
#
# Draining the SE does not accept new request but still
# processes old requests.
#
# The default value is not valid, so this property must be
# configured.
#
# In practice, most sites will use 'Production' all the time.
#
info-provider.glue-se-status=UNDEFINEDVALUE


# Quality level [2.0]: is the "maturity of the service in terms of
# quality of the software components".
#
# This should be one of the following values (case is significant)
#
# development The component is under active development both
# in functionalities and interfaces.
#
# testing The component has completed the development
# phase and is under testing.
#
# pre-production The component has completed the development and
# passed the testing phase; it is being used in
# real world scenarios.
#
# production The component has completed the development and
# is considered stable for real world scenarios.
#
# The default value is not valid, so this property must be
# configured.
#
# In practice, most sites will use 'production' all the time.
#
info-provider.dcache-quality-level=UNDEFINEDVALUE


# GlueSEArchitecture [1.3]: the architecture of the underlying
# storage dCache is using. This should be one of the following
# values:
#
# disk non-robust, single-disk storage,
#
# multidisk disk-based storage that is robust against single disk
# failures,
#
# tape dCache has access to an HSM system.
#
# other reserved for other technologies, although setting this
# value is unlikely to be correct.
#
# The default value is not valid, so this property must be
# configured.
#
# In practice, most sites without HSM connectivity will use RAID
# disk pools, so 'multidisk' is appropriate. Those with an attached
# HSM should use 'tape'.
#
info-provider.dcache-architecture=UNDEFINEDVALUE


# DIT-PARENT [1.3]: A site will typically have multiple
# resource-level BDIIs, a single site-level BDII and zero or more
# top-level BDIIs. The site-level BDII is periodically updated with
# information from the various resource-level BDII.
#
# Most sites will deploy a resource-level BDII for each dCache
# instance. The output from running the info provider script is
# injected into this BDII. If the BDII is on a dCache head node
# then the BDII is very likely a resource-level BDII. This is the
# default for YAIM based installations.
#
# It is also possible to inject information directly into the
# site-level BDII. This removes the need to have a resource-level
# BDII; however, the site-level BDII requires LDIF that has a
# slightly different structure.
#
# If the LDIF output is for a resource-level BDII then the
# info-provider.dit-parent property below should have the value
# 'resource'. If the LDIF output is for a site-level BDII then it
# should have the same value as the info-provider.site-unique-id
# property above.
#
# If you are unsure, do not modify this property.
#
info-provider.dit-parent=resource




# ---- Location of tape accounting information
#
# The information about a site's tape usage that WLCG would like
# published cannot come from dCache, so must be supplied by the site.
#
# This info-provider expects that this information is in a separate
# file. Sites should write a small script that creates the file with
# up-to-date information. There is a description of this file's format
# inside the tape-info-empty.xml file.
#
# If you are a site with tape storage that is to be published,
# change the tape-info location value below to an appropriate
# location, such as:
#
# /var/opt/dcache/tape-info.xml
#
# Ensure that the file exists with up-to-date information.
#
# If you are a site without tape storage then simply leave this
# property alone.
#
info-provider.paths.tape-info=${dcache.paths.share}/xml/tape-info-empty.xml



# ---- Host that is running the web service
#
# The name of the machine that is running the dCache web server.
# This is used to build the URI for fetching dCache's current state.
# The port is defined elsewhere by the httpdPort property.
#
info-provider.http.host = localhost


# ---- The GLUE versions that are published
#
# This property describes whether to publish GLUE v1.3 only, to
# publish GLUE v2.0 only, or to publish both GLUE v1.3 and GLUE
# v2.0. Acceptable values are '1.3', '2.0' or 'both'.
#
info-provider.publish = both


# ---- XSLT processor
#
# This property describes which XSLT processor to use. Acceptable
# values are 'xsltproc' and 'saxon'.
#
info-provider.processor = saxon


# ---- Site-specific configuration
#
# This property describes the location of the site-specific
# configuration. An example file is provided, but it needs to be
# carefully adjusted.
#
info-provider.configuration.site-specific.location=${dcache.paths.etc}/info-provider.xml


# ---- Directory of LDAP transformation configuration
#
# This property describes in which directory the site-independent
# configuration files are stored.
#
info-provider.configuration.dir = ${dcache.paths.share}/info-provider


# ---- Filename of LDAP transformation configuration
#
# This variable provides the filename that describes how the XML
# should be transformed.
#
info-provider.configuration.file = glue-${info-provider.publish}.xml


info-provider.configuration.location = ${info-provider.configuration.dir}/${info-provider.configuration.file}


info-provider.xylophone.dir = ${dcache.paths.share}/xml/xylophone


info-provider.saxon.dir = ${dcache.paths.classes}/saxon


#
# Obsolete or Forbidden properties
#


(forbidden)httpPort=use httpdPort instead
(forbidden)httpHost=use info-provider.http.host instead
(forbidden)xsltProcessor=use info-provider.processor instead
(forbidden)xylophoneConfigurationFile=use info-provider.configuration.file instead
(forbidden)saxonDir=use info-provider.saxon.dir instead
(forbidden)xylophoneXSLTDir=use info-provider.xylophone.dir instead
(forbidden)xylophoneConfigurationDir=use info-provider.configuration.dir instead



INFO


# -----------------------------------------------------------------------
# Default values for info service
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for the info service
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.



#
# Document which TCP ports are opened
#
(immutable)info/net.ports.tcp = 22112



NFS


# -----------------------------------------------------------------------
# Default values for nfsv41
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for nfsv41
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Mover queue
#
# The mover queue on the pool to which this request will be
# scheduled.
#
nfsIoQueue=



#
# TCP port number of NFS door
#
nfs.port=2049



#
# Enable NFSv3 service inside v4.1 door
#
nfs.v3=false



#
# The local NFSv4 domain name. An NFSv4 domain is a namespace
# with a unique username<->UID and groupname<->GID mapping.
#
nfs.domain=



#
# The username<->UID and groupname<->GID mapping results are cached to
# improve NFS interface performance. The following values allow tuning
# the cache size and timeout.
#


# maximum number of entries in the cache
nfs.idmap.cache.size = 512


# maximum lifetime of a cache entry
nfs.idmap.cache.timeout = 30


# time unit used for timeout. Valid values are:
# SECONDS, MINUTES, HOURS and DAYS
nfs.idmap.cache.timeout.unit = SECONDS
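

# For example (hypothetical values, set in etc/dcache.conf), a site
# with a slow directory service might cache more entries for longer:
#
#   nfs.idmap.cache.size=4096
#   nfs.idmap.cache.timeout=10
#   nfs.idmap.cache.timeout.unit=MINUTES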


#
# enable RPCSEC_GSS
#
nfs.rpcsec_gss = false



#
# Document which TCP ports are opened
#
(immutable)nfs/net.ports.tcp=(111) ${nfs.port}
(immutable)nfs/net.ports.udp=(111)



NFSV3


# -----------------------------------------------------------------------
# Default values for nfsv3
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for nfsv3
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


nfsv3/cell.name=NFSv3-${host.name}


#
# Database related settings reserved for internal use.
#
nfsv3/db.host=${chimera.db.host}
nfsv3/db.name=${chimera.db.name}
nfsv3/db.user=${chimera.db.user}
nfsv3/db.password=${chimera.db.password}
nfsv3/db.driver=${chimera.db.driver}
nfsv3/db.url=${chimera.db.url}
nfsv3/db.schema.auto=false


#
# Document which TCP ports are opened
#
(immutable)nfsv3/net.ports.tcp=(111) 2049
(immutable)nfsv3/net.ports.udp=(111) 2049



PINMANAGER


# -----------------------------------------------------------------------
# Default values for pinmanager
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for pinmanager
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Cell name of pin manager service
#
pinmanager/cell.name=PinManager


# ---- Pin Manager Database Host
#
# NB. If the srmDatabaseHost variable is set and the following
# variable is not then the value of srmDatabaseHost is used.
#
# Do not change unless you know what you are doing.
#
pinManagerDbHost=${srmDatabaseHost}


# ---- Pin Manager Database Name
#
# NB. If the srmDbName variable is set and the following variable
# is not then the value of srmDbName is used.
#
# Do not change unless you know what you are doing.
#
pinManagerDbName=${srmDbName}


# ---- Pin Manager Database User
#
# NB. If the srmDbUser variable is set and the following variable
# is not then the value of srmDbUser is used.
#
# Do not change unless you know what you are doing.
#
pinManagerDbUser=${srmDbUser}


# ---- Pin Manager Database Password
#
# NB. If the srmDbPassword variable is set and the following
# variable is not then the value of srmDbPassword is used.
#
# Do not change unless you know what you are doing.
#
pinManagerDbPassword=${srmDbPassword}


# ---- Pin Manager Database Password File
#
# NB. If the srmDbPasswordFile variable is set and the following
# variable is not then the value of srmDbPasswordFile is used.
#
# Do not change unless you know what you are doing.
#
pinManagerPasswordFile=${srmPasswordFile}


# ---- Pin Manager Maximum Number of Database connections
#
# Do not change unless you know what you are doing.
#
pinManagerMaxActiveJdbcConnections=50


# ---- Pin Manager Maximum Number of seconds to wait for a
# connection before returning an error
#
# Do not change unless you know what you are doing.
#
pinManagerMaxJdbcConnectionsWaitSec=30


# ---- Pin Manager Maximum Number of Idle Database connections
#
# Do not change unless you know what you are doing.
#
pinManagerMaxIdleJdbcConnections=10


# in seconds, -1 for infinite
pinManagerMaxPinDuration=-1


#
# Database related settings. Currently reserved for internal use.
#
pinmanager/db.host=${pinManagerDbHost}
pinmanager/db.name=${pinManagerDbName}
pinmanager/db.user=${pinManagerDbUser}
pinmanager/db.password=${pinManagerDbPassword}
pinmanager/db.password.file=${pinManagerPasswordFile}
pinmanager/db.driver=org.postgresql.Driver
pinmanager/db.url=jdbc:postgresql://${db.host}/${db.name}
pinmanager/db.schema.changelog=org/dcache/pinmanager/model/db.changelog-master.xml



PNFSMANAGER


# -----------------------------------------------------------------------
# Default values for pnfsmanager
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for pnfsmanager
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


pnfsmanager/cell.name=PnfsManager


# ---- pnfs Mount Point
#
# The mount point of pnfs on the admin node. The default value is:
# /pnfs/fs
#
pnfs=/pnfs/fs


# ---- Default pnfs server
#
# An older version of the pnfsManager autodetects the
# possible pnfs filesystems. The defaultPnfsServer variable is
# chosen from the list and used as the primary pnfs filesystem.
# Currently the others are ignored. The pnfs variable can be used
# to override this mechanism.
#
defaultPnfsServer=localhost


# -- leave this unless you are running an enstore HSM backend.
#
pnfsInfoExtractor=diskCacheV111.util.OsmInfoExtractor


# -- replace with org.dcache.chimera.namespace.ChimeraEnstoreStorageInfoExtractor
# if you are running an enstore HSM backend.
#
hsmStorageInfoExtractor=org.dcache.chimera.namespace.ChimeraOsmStorageInfoExtractor


# ---- Number of threads per thread group
#
# Depending on how powerful your pnfs server host is you may set
# this to up to 50.
#
pnfsNumberOfThreads=4


# ---- Number of cache location threads
#
# The number of threads used for cache location updates and
# lookups. If 0 then the regular pnfs manager thread queues are
# used for cache location lookups. If non-zero then dedicated
# threads for cache location operations are created.
#
pnfsNumberOfLocationThreads=0


# ---- Number of thread groups
#
# A PNFS tree may be split into multiple databases. Each database is
# single threaded and hence accessing the same database from
# multiple threads provides only a minor speed-up. To ensure good
# load balancing when using multiple databases, the PnfsManager
# supports thread groups. Any database is assigned to one and only
# one thread group, thus databases assigned to different thread
# groups are guaranteed not to block each other. Each thread group
# will have $pnfsNumberOfThreads threads.
#
# For best performance isolation, set this to be equal to the largest
# database ID defined in PNFS. When increasing
# pnfsNumberOfThreadGroups, you may want to lower
# pnfsNumberOfThreads.
#
# Notice that PNFS access is still subject to the number of threads
# created in the PNFS daemon. If this number is lower than the
# number of concurrent requests, then contention may still occur
# even though multiple databases are used.
#
pnfsNumberOfThreadGroups=1
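

# As a sketch (hypothetical numbers): a PNFS instance whose largest
# database ID is 4 could be configured with
#
#   pnfsNumberOfThreadGroups=4
#   pnfsNumberOfThreads=2
#
# trading per-group threads for better isolation between databases.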


# ---- Number of list threads
#
# The PnfsManager uses dedicated threads for directory list
# operations. This variable controls the number of threads to
# use.
#
pnfsNumberOfListThreads=2


# ---- Max chunk size in list replies
#
# To avoid out of memory errors when listing large directories,
# PnfsManager breaks up directory listings into chunks of entries. This
# setting controls the maximum number of directory entries in a
# chunk.
#
pnfsListChunkSize=100


# ---- Threshold for when to log slow requests
#
# Threshold in milliseconds for when to log slow requests. Requests
# with a processing time larger than the threshold are logged. Set
# to 0 to disable. This can also be enabled at runtime using the
# 'set log slow threshold' command.
#
pnfsLogSlowThreshold=0


# ---- Maximum number of requests in a processing queue
#
# PnfsManager maintains a request queue per processing thread. This
# setting specifies the queue length at which point new requests
# will be denied rather than enqueued for processing. Set to 0 for
# unlimited queues.
#
pnfsQueueMaxSize=0


# ---- Database configuration (only relevant with PNFS backend)
#
# Only change these variables if you have configured your PostgreSQL
# instance differently from what is recommended in the dCache Book.
#
pnfsDbUser=srmdcache
pnfsDbPassword=srmdcache
pnfsPasswordFile=



# ---- PnfsManager message folding
#
# Whether to use message folding in PnfsManager. When message folding
# is enabled, the PnfsManager will try to fold or collapse processing of
# identical messages. This can reduce the load on PNFS or Chimera if a
# large number of simultaneous requests on the same objects are performed.
#
pnfsFolding=false


# ---- Inherit file ownership when creating files and directories
#
# By default, new files and directories will be owned by the
# user who created them. The owner field will
# be the UID of the creator and the group field will be the primary
# GID of the creator.
#
# If this flag is set to true, then both the owner and the group
# field will inherit the values from the parent directory.
#
# In either case, a door may override the values with values
# provided by the user.
#
pnfsInheritFileOwnership=false


# ---- Whether to verify lookup permissions for the entire path
#
# For performance reasons dCache with PNFS only verified the lookup
# permissions of the directory containing the file system entry
# corresponding to the path. I.e. only the lookup permissions for the
# last parent directory of the path were enforced. For compatibility
# reasons Chimera inherited these semantics.
#
# When this option is set to true, Chimera will verify the lookup
# permissions of all directories of a path.
#
pnfsVerifyAllLookups=false


# ---- Storage for cacheinfo (only relevant with PNFS backend)
#
# This variable defines where cacheinfo is to be stored.
#
# Valid values are:
# companion
# pnfs
#
# The default value is:
# pnfs
#
# If 'companion' is specified then the cacheinfo will be stored in a
# separate database. If 'pnfs' is specified, then cacheinfo will
# be stored in pnfs.
#
# For new installations, 'companion' is recommended.
#
# Existing installations that store cacheinfo in pnfs must run
# 'pnfs register' on every pool after switching from 'pnfs' to
# 'companion'. See the documentation for more details.
#
cacheInfo=companion


# ---- Name of host holding the companion database
#
# Only used when dcache.namespace is pnfs and cacheInfo is companion.
#
companionDatabaseHost=localhost


# ---- Default Access Latency and Retention Policy
#
# These variables affect only newly created files.
#
# The valid values are:
# AccessLatency : NEARLINE, ONLINE
# RetentionPolicy: CUSTODIAL, REPLICA, OUTPUT
# However, do not use OUTPUT.
#
DefaultRetentionPolicy=CUSTODIAL
DefaultAccessLatency=NEARLINE
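

# For example (shown only as a sketch), a disk-only installation
# without a tape backend would typically set, in etc/dcache.conf:
#
#   DefaultRetentionPolicy=REPLICA
#   DefaultAccessLatency=ONLINE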



#
# Database related settings reserved for internal use.
#
pnfsmanager/db.host=${chimera.db.host}
pnfsmanager/db.name=${chimera.db.name}
pnfsmanager/db.user=${chimera.db.user}
pnfsmanager/db.password=${chimera.db.password}
pnfsmanager/db.driver=${chimera.db.driver}
pnfsmanager/db.url=${chimera.db.url}
pnfsmanager/db.schema.auto=false


#
# Old properties
#
(forbidden)namespaceProvider=Use dcache.namespace instead



POOL


# -----------------------------------------------------------------------
# Default values for pools
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for pool
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Name of pool cell
#
# Currently this has to be the same as the pool name.
#
pool/cell.name=${name}


# ---- Port used for passive DCAP movers
#
# When zero, a random port from the LAN port range is used.
#
pool.dcap.port=0


# ---- Mover queues to create on a pool
#
# Mover queues schedule and execute transfers on a pool. Each mover
# queue can have individual limits and timeouts. These are
# configured at runtime through the admin interface.
#
# Doors can be configured to submit transfers to named queues.
#
# This property takes a comma separated list of named mover queues.
# The default mover queue is called 'regular' and is always created.
# The 'regular' mover queue must not be defined in this property.
#
# Named queues with names that begin with a hyphen are processed in
# LIFO order; all other queues are processed in FIFO order.
#
poolIoQueue=
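

# For example (hypothetical queue names in etc/dcache.conf): create
# two named queues on the pools and direct GridFTP transfers to one
# of them through the corresponding door property:
#
#   poolIoQueue=wan,lan
#   gsiftpIoQueue=wan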


# ---- Whether to monitor pool health
#
# If true, then the pool periodically performs a number of health
# checks and disables itself if an error is detected.
#
checkRepository=true


# ---- Do not start the pool until the specified paths exist.
#
# If specified, the pool startup procedure will block as long as the
# specified paths do not exist. This is useful to delay pool startup
# until the repository's directory is available.
#
# Format: [path1][:path2]...[:pathN]
# For example:
# waitForFiles=${path}/data
#
waitForFiles=


# ----- Whether to request file replication for new files
#
# If enabled, the pool requests that the PoolManager replicate new
# files.
#
replicateOnArrival=off


# ----- Whether to use memory mapping in FTP mover
#
# If true, the FTP mover utilizes memory mapping for checksum
# verification. This potentially improves performance, but may cause
# compatibility issues on some platforms.
#
gsiftpAllowMmap=false


# ----- Distance between transfer and checksum computation in FTP mover
#
# When the checksum is computed on the fly, the FTP mover performs
# the checksum calculation in a separate thread. This property
# indicates the maximum distance in bytes between the transfer and
# the checksum calculation. If too small then the transfer may be
# throttled by a slow checksum calculation. If too large then data
# may have to be read back from disk rather than read from the
# cache.
#
gsiftpReadAhead=16777216


# ---- Allow pool to remove precious files on request from cleaner.
#
# This option is respected only when lfs=none. If lfs=precious then
# removal of precious files is always allowed.
#
allowCleaningPreciousFiles=true


# ---- Which meta data repository implementation to use.
#
# Valid values are:
# org.dcache.pool.repository.meta.file.FileMetaDataRepository
# org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#
# This selects which meta data repository implementation to use.
# This is essentially a choice between storing meta data in a large
# number of small files in the control/ directory, or to use the
# embedded Berkeley database stored in the meta/ directory. Both
# directories are within the pool directory.
#
metaDataRepository=org.dcache.pool.repository.meta.file.FileMetaDataRepository


# ---- Which meta data repository to import from.
#
# Valid values are:
# org.dcache.pool.repository.meta.file.FileMetaDataRepository
# org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
# org.dcache.pool.repository.meta.EmptyMetaDataStore
#
# This variable selects which meta data repository to import data
# from if the information is missing from the main repository. This
# is useful for converting from one repository implementation to
# another, without having to fetch all the information from the
# central PnfsManager.
#
metaDataRepositoryImport=org.dcache.pool.repository.meta.EmptyMetaDataStore
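

# As a sketch of a one-time conversion from file-based meta data to
# the embedded Berkeley database, combining the two properties above:
#
#   metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#   metaDataRepositoryImport=org.dcache.pool.repository.meta.file.FileMetaDataRepository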


# ---- The cell to notify after a file was flushed to tape
flushMessageTarget=broadcast


# ---- Garbage collector used when the pool runs out of space
sweeper=org.dcache.pool.classic.SpaceSweeper2


# ---- The cell to notify with poolup messages
poolupDestination=broadcast


# ---- Thread pool size for xrootd disk IO threads
xrootdMoverDiskThreads=20


# ---- Thread pool size for xrootd socket IO threads.
#
# If unset the number of CPU cores in the host is used as a default.
#
xrootdMoverSocketThreads=


# ---- Amount of memory to use for buffering per xrootd connection
#
# Specified in bytes.
#
xrootdMoverMaxMemoryPerConnection=16777216


# ---- Total amount of memory to use for buffering for xrootd connections
#
# Specified in bytes.
#
xrootdMoverMaxMemory=67108864


# ---- Maximum size of an xrootd frame
#
# Specified in bytes.
#
xrootdMoverMaxFrameSize=2097152


# ---- Thread pool size for http disk IO threads
httpMoverDiskThreads=20


# ---- Thread pool size for http socket IO threads
#
# If unset the number of CPU cores in the host is used as a default.
#
httpMoverSocketThreads=


# ---- Amount of memory to use for buffering per http connection
#
# Specified in bytes
#
httpMoverConnectionMaxMemory=16777216


# ---- Total amount of memory to use for buffering for http connections
#
# Specified in bytes.
#
httpMoverMaxMemory=67108864


# ---- Max chunk size in bytes for received chunked HTTP packets
#
# This setting affects the maximum frame size of chunked HTTP packets
# received by the HTTP mover from a remote peer.
# Setting this value too high can have memory impacts, setting it too low
# can mean that longer messages won't be accepted by the mover.
#
httpMoverMaxChunkSize=1048576


# ----- Chunk size in bytes for chunked HTTP packages sent by the server
httpMoverChunkSize=8192


# ---- HTTP client timeout
#
# Period in seconds after which a client will be disconnected if the
# connection is idle (not reading or writing)
#
httpMoverClientIdleTimeout=300


# ---- HTTP connect timeout
#
# Timeout in seconds that the mover will wait for a client connect before
# shutting down
#
httpMoverConnectTimeout=300


# ---- Large File Store
#
# Legacy option for disk only pools. There is usually no need to
# change this setting as the choice whether to write a file to tape
# is now controlled by the retention policy of each file.
#
lfs=none


# ---- Maximum amount of space to be used by a pool
#
# In bytes or 'Infinity'. May also be configured at runtime through
# the admin interface. If 'Infinity', then the pool size is
# determined from the size of the file system.
#
maxDiskSpace=Infinity


# ---- Pool tags
#
# White space separated list of key value pairs to associate with a
# pool.
#
tags=hostname=${host.name}



#
# Document which TCP ports are opened
#
(immutable)pool/net.ports.tcp=${net.wan.port.min}-${net.wan.port.max} ${net.lan.port.min}-${net.lan.port.max}



# ---- Obsolete and forbidden properties
#
(forbidden)waitForRepositoryReady=Use waitForFiles instead
(obsolete)removeUnexistingEntriesOnFlush=pool will always check if file \
  still exists before flushing; non-existing entries are then \
  deleted



POOLMANAGER


# -----------------------------------------------------------------------
# Default values for pool manager
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for pool manager
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


poolmanager/cell.name=PoolManager


selectionUnit=diskCacheV111.poolManager.PoolSelectionUnitV2
costCalculator=diskCacheV111.pools.CostCalculationV5
threadPool=diskCacheV111.util.ThreadPoolNG
quotaManager=none
poolStatusRelay=broadcast


# ---- Setup file for PoolManager
#
# Must be writeable by user ${dcache.user} for the 'save' command of
# PoolManager to work.
#
poolmanager.setup.file=/var/lib/dcache/config/poolmanager.conf



REPLICA


# -----------------------------------------------------------------------
# Default values for replica
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for replica
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.



# To activate Replica Manager you need to make changes in 3 places:
# 1) you need to run the replica service somewhere in your
# dCache installation by enabling it in a layout file (see the
# sketch below)
# 2) configure the service in the etc/dcache.conf file on the node where
# the replica service is running
# 3) define Resilient pool group(s) in PoolManager.conf on the host
# running the poolmanager service
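

# As a sketch of step 1 (with a hypothetical domain name), a layout
# file entry that runs the replica service could look like:
#
#   [replicaDomain]
#   [replicaDomain/replica]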


# ---- Cell name of Replica Manager
#
replica/cell.name=replicaManager


# ---- Will Replica Manager be started?
#
# Values: no, yes
#
# This has to be set to 'yes' on every node, if there is a replica
# manager in this dCache instance. Where the replica manager is
# started is controlled in 'etc/node_config'. If it is not started
# and this is set to 'yes' there will be error messages in
# log/dCacheDomain.log. If this is set to 'no' and a replica
# manager is started somewhere, it will not work properly.
#
replicaManager=no


# ---- Which pool-group will be the group of resilient pools?
#
# Values: <pool-Group-Name>, a pool-group name existing in the PoolManager.conf
#
# Only pools defined in pool group ResilientPools in
# config/PoolManager.conf will be managed by ReplicaManager. You
# must edit config/PoolManager.conf to make the replica manager
# work. To use another pool group defined in PoolManager.conf for
# replication, please specify group name by changing this setting.
#
resilientGroupName=ResilientPools
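

# As an illustration (hypothetical pool names), the resilient pool
# group could be defined in PoolManager.conf with psu commands such
# as:
#
#   psu create pgroup ResilientPools
#   psu addto pgroup ResilientPools pool1
#   psu addto pgroup ResilientPools pool2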


#
# Replica Manager database settings
#


replicaManagerDatabaseHost=localhost
replicaDbName=replicas
replicaDbUser=srmdcache
replicaDbPassword=srmdcache
replicaPasswordFile=
replicaDbJdbcDriver=org.postgresql.Driver


replicaPoolWatchDogPeriod=600
replicaWaitDBUpdateTimeout=600
replicaExcludedFilesExpirationTimeout=43200
replicaDelayDBStartTimeout=1200
replicaAdjustStartTimeout=1200
replicaWaitReplicateTimeout=43200
replicaWaitReduceTimeout=43200
replicaDebug=false
replicaMaxWorkers=6
replicaMin=2
replicaMax=3
replicaCheckPoolHost=true
replicaEnableSameHostReplica=false



#
# Database related settings reserved for internal use.
#
replica/db.host=${replicaManagerDatabaseHost}
replica/db.name=${replicaDbName}
replica/db.user=${replicaDbUser}
replica/db.password=${replicaDbPassword}
replica/db.driver=${replicaDbJdbcDriver}
replica/db.url=jdbc:postgresql://${db.host}/${db.name}
replica/db.schema.auto=false



SPACEMANAGER


# -----------------------------------------------------------------------
# Default values for spacemanager
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for spacemanager
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Cell name of space manager
#
# This property defines the well known cell name of the space
# manager service.
#
spacemanager/cell.name=SrmSpaceManager


# ---- Service name of space manager
#
# This property defines the cell address other cells talk to in
# order to contact the space manager service.
#
spacemanager=SrmSpaceManager


# ---- Whether the SRM Space Manager should be enabled.
#
# The value must be consistent across these services: gridftp,
# gsidcap, kerberosdcap, kerberosftp, spacemanager, srm,
# transfermanagers, xrootd, webdav.
#
srmSpaceManagerEnabled=no


# ---- Default access latency
#
# Default access latency used if space reservation request does not
# specify one.
#
DefaultAccessLatencyForSpaceReservation=${DefaultAccessLatency}


# ---- Reserve space for non SRM transfers.
#
# If the transfer request comes from the door and there was no
# prior space reservation made for this file, should we try to
# reserve space before satisfying the request?
#
SpaceManagerReserveSpaceForNonSRMTransfers=false


# ---- Location of LinkGroupAuthorizationFile
#
# The LinkGroupAuthorizationFileName file contains the list of VOMS
# FQANs that are allowed to make space reservations within a given
# link group.
#
SpaceManagerLinkGroupAuthorizationFileName=
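

# A minimal sketch of such a file (hypothetical link group name and
# FQAN):
#
#   LinkGroup my-link-group
#   /example.org/Role=production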



#
# Database related settings reserved for internal use.
#
spacemanager/db.host=${srmDatabaseHost}
spacemanager/db.name=${srmDbName}
spacemanager/db.user=${srmDbUser}
spacemanager/db.password=${srmDbPassword}
spacemanager/db.driver=org.postgresql.Driver
spacemanager/db.url=jdbc:postgresql://${db.host}/${db.name}
spacemanager/db.schema.auto=true


#
# Obsolete or Forbidden properties
#


(forbidden)SpaceManagerDefaultAccessLatency=use DefaultAccessLatencyForSpaceReservation instead



SRM


# -----------------------------------------------------------------------
# Default values for srm
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for srm
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes will
# be lost when you next upgrade.


# ---- Cell name of srm service
#
srm/cell.name=SRM-${host.name}


# ---- TCP Port
#
srmPort=8443
srm/port=${srmPort}


# ---- Host name of srm service
#
# For certain operations srm needs to know its domain name. The
# property "srmHost" can be used to override the default value. If
# this value is not set, the value is detected automatically and it is
# equivalent to the output of the unix hostname program.
#
srmHost=${host.fqdn}


# ---- Host names of srm services in this deployment
#
# The host part of the source URL (SURL) is used to determine if the
# SURL references a file in this storage system. In case of the copy
# operation, srm needs to be able to distinguish between the local
# SURL and the remote one. Also srm needs to refuse to perform
# operations on non-local SRM URLs. localSrmHosts is a comma separated
# list of hosts that will be considered local by this srm service.
# This parameter might need to be defined as a list because a
# multihomed or distributed server may have more than one
# network name. If localSrmHosts is not specified, srmHost will be
# used.
#
localSrmHosts=${srmHost}


# ---- Transport layer encryption
#
# The security transport to use. The possible values are SSL or GSI.
# GSI (Grid Security Infrastructure) is the commonly deployed
# protocol, but SSL is the industry standard.
#
srmGssMode=GSI


# ---- Client side transport layer encryption
#
# The transport used when contacting remote SRM instances. This is
# only used for third-party copies (srmCopy).
#
srmClientTransport=GSI


# ---- Connector that should be used in the srm service
#
# srmJettyConnectorType can be either sync or async. async means that
# a non-blocking socket is used, while sync means that a blocking
# socket is used.
#
# Use the async setting if you expect a very large number of
# simultaneous SRM connections.
#
srmJettyConnectorType=sync


# ---- Database host name
#
srmDatabaseHost=localhost


# ---- Database name
srmDbName=dcache


# ---- Database user name
srmDbUser=srmdcache


# ---- Database password
srmDbPassword=srmdcache


# ---- Database password file
srmPasswordFile=


# ---- Log to database
#
# If set to true, the transfer services log transfers to the srm
# database.
#
srmDbLogEnabled=false


# ---- Authorization cache lifetime
#
# Time in seconds to cache gPlazma authorization information.
#
srmAuthzCacheLifetime=180


# ---- TCP streams to use for GridFTP transfer
#
# The number of concurrent TCP streams used by srmCopy-controlled
# GridFTP transfers.
#
parallelStreams=10


# ---- Timeout of the external srmCopy script
#
# Timeout in seconds for the completion of a transfer via an
# external client, should an external client be used
# for MSS to MSS transfers.
#
srmTimeout=3600


# ---- Buffer size used for srmCopy transfer
#
# Specified in bytes.
#
srmBufferSize=1048576


# ---- TCP buffer size used for srmCopy transfer
#
# Specified in bytes.
#
srmTcpBufferSize=1048576


# ---- Controls debug functionality of the external srmCopy script
#
srmDebug=true


# ---- Threads that accept TCP connections
srmJettyConnectorAcceptors=1


# ---- Milliseconds before an idle TCP connection is closed
srmJettyConnectorMaxIdleTime=60000


# ---- Milliseconds before an idle TCP connection is closed during high load
srmJettyConnectorLowResourceMaxIdleTime=20000


# ---- TCP backlog for SRM connections
srmJettyConnectorBackLog=1024


# ---- Maximum number of threads used for SRM request processing
#
# Whenever a client submits an SRM request a thread is allocated. This
# setting controls the maximum number of such threads.
#
# Notice that this does not control the number of SRM transfers that
# can be active at any given time: An SRM transfer involves several
# requests by the client (e.g. srmPrepareToGet, srmStatusOfGetRequest,
# srmReleaseFiles).
#
# There is also a choice whether to process requests synchronously or
# asynchronously. If processed synchronously, the request is not
# answered until processed. This means that a thread is bound to the
# request for the duration of the request processing. If processed
# asynchronously, the thread is released right away and the client
# submits new requests to poll for the completion of the previously
# submitted request. This adds latency and increases authentication
# overhead, but frees threads and TCP connections.
#
srmJettyThreadsMax=500


# ---- Minimum number of threads used for SRM request processing
srmJettyThreadsMin=10


# ---- Milliseconds before an idle request processing thread is terminated
srmJettyThreadsMaxIdleTime=30000


# ---- Maximum number of queued SRM requests
#
# Once the limit is reached no new connections will be accepted;
# instead, the operating system will queue them in the TCP backlog.
# Once the TCP backlog is filled, the operating system will reject
# further TCP connections.
#
srmJettyThreadsMaxQueued=500


srmGetReqThreadQueueSize=10000
srmGetReqThreadPoolSize=250
srmGetReqMaxWaitingRequests=1000
srmGetReqReadyQueueSize=10000
srmGetReqMaxReadyRequests=2000
srmGetReqMaxNumberOfRetries=10
srmGetReqRetryTimeout=60000
srmGetReqMaxNumOfRunningBySameOwner=100
srmGetDatabaseEnabled=${srmDatabaseEnabled}
srmGetCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmGetKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmGetExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmGetRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmGetStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}



# ---- Milliseconds until get requests are processed asynchronously
#
# Some SRM operations may be processed synchronously or
# asynchronously, at the server's discretion. dCache can start to
# process such requests synchronously and, if this is taking too long,
# reply asynchronously and continue to work on the operation in the
# background.
#
# This setting specifies the time in milliseconds after which get
# requests are handled asynchronously. Set to 'infinity' to disable
# asynchronous processing.
#
# Asynchronous processing avoids holding TCP connections to the server
# while the request is processed.
#
srmGetReqSwitchToAsynchronousModeDelay=1000
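

# For example, to disable asynchronous processing of get requests
# entirely (as described above), set in etc/dcache.conf:
#
#   srmGetReqSwitchToAsynchronousModeDelay=infinity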


srmBringOnlineReqThreadQueueSize=${srmGetReqThreadQueueSize}
srmBringOnlineReqThreadPoolSize=${srmGetReqThreadPoolSize}
srmBringOnlineReqMaxWaitingRequests=${srmGetReqMaxWaitingRequests}
srmBringOnlineReqReadyQueueSize=${srmGetReqReadyQueueSize}
srmBringOnlineReqMaxReadyRequests=${srmGetReqMaxReadyRequests}
srmBringOnlineReqMaxNumberOfRetries=${srmGetReqMaxNumberOfRetries}
srmBringOnlineReqRetryTimeout=${srmGetReqRetryTimeout}
srmBringOnlineReqMaxNumOfRunningBySameOwner=${srmGetReqMaxNumOfRunningBySameOwner}
srmBringOnlineDatabaseEnabled=${srmDatabaseEnabled}
srmBringOnlineCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmBringOnlineKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmBringOnlineExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmBringOnlineRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmBringOnlineStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}


# ---- Milliseconds until bring online requests are processed asynchronously
#
# Some SRM operations may be processed synchronously or
# asynchronously, at the server's discretion. dCache can start to
# process such requests synchronously and, if this is taking too long,
# reply asynchronously and continue to work on the operation in the
# background.
#
# This setting specifies the time in milliseconds after which bring
# online requests are handled asynchronously. Set to 'infinity' to
# disable asynchronous processing.
#
# Asynchronous processing avoids holding TCP connections to the server
# while the request is processed.
#
srmBringOnlineReqSwitchToAsynchronousModeDelay=${srmGetReqSwitchToAsynchronousModeDelay}



srmPutReqThreadQueueSize=10000
srmPutReqThreadPoolSize=250
srmPutReqMaxWaitingRequests=1000
srmPutReqReadyQueueSize=10000
srmPutReqMaxReadyRequests=1000
srmPutReqMaxNumberOfRetries=10
srmPutReqRetryTimeout=60000
srmPutReqMaxNumOfRunningBySameOwner=100
srmPutDatabaseEnabled=${srmDatabaseEnabled}
srmPutCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmPutKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmPutExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmPutRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmPutStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}


# ---- Milliseconds until put requests are processed asynchronously
#
# Some SRM operations may be processed synchronously or
# asynchronously, at the server's discretion. dCache can start to
# process such requests synchronously and, if this is taking too long,
# reply asynchronously and continue to work on the operation in the
# background.
#
# This setting specifies the time in milliseconds after which put
# requests are handled asynchronously. Set to 'infinity' to disable
# asynchronous processing.
#
# Asynchronous processing avoids holding TCP connections to the server
# while the request is processed.
#
srmPutReqSwitchToAsynchronousModeDelay=1000


srmCopyReqThreadQueueSize=10000
srmCopyReqThreadPoolSize=250
srmCopyReqMaxWaitingRequests=1000
srmCopyReqMaxNumberOfRetries=10
srmCopyReqRetryTimeout=60000
srmCopyReqMaxNumOfRunningBySameOwner=100
srmCopyDatabaseEnabled=${srmDatabaseEnabled}
srmCopyCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmCopyKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmCopyExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmCopyRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmCopyStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}


# ---- Directory entries to include in list reply
#
# Number of entries allowed to be returned in a single srmls
# request. Directory listings larger than this must be broken into
# multiple requests.
#
srmLsMaxNumberOfEntries=1000


# ---- List recursion depth
#
# Maximum recursion depth.
#
srmLsMaxNumberOfLevels=100


srmLsRequestThreadQueueSize=1000
srmLsRequestThreadPoolSize=30
srmLsRequestMaxWaitingRequests=1000
srmLsRequestMaxNumberOfRetries=10
srmLsRequestRetryTimeout=60000
srmLsRequestMaxNumberOfRunningBySameOwner=100
srmLsRequestLifetime=3600000


srmLsDatabaseEnabled=${srmDatabaseEnabled}
srmLsCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmLsKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmLsExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmLsRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmLsStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}


# ---- Milliseconds until list requests are processed asynchronously
#
# Some SRM operations may be processed synchronously or
# asynchronously, at the server's discretion. dCache can start to
# process such requests synchronously and, if this is taking too long,
# reply asynchronously and continue to work on the operation in the
# background.
#
# This setting specifies the time in milliseconds after which list
# requests are handled asynchronously. Set to 'infinity' to disable
# asynchronous processing.
#
# Asynchronous processing avoids holding TCP connections to the server
# while the request is processed.
#
# Notice that not all clients support asynchronous listing. Set the
# property to 'infinity' if compatibility with these clients is
# required.
#
srmLsRequestSwitchToAsynchronousModeDelay=1000
(obsolete)srmAsynchronousLs=See srmLsRequestSwitchToAsynchronousModeDelay
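
# Example: to retain compatibility with clients that do not support
# asynchronous listing (see the note above), set in etc/dcache.conf:
#
# srmLsRequestSwitchToAsynchronousModeDelay=infinity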



srmReserveDatabaseEnabled=${srmDatabaseEnabled}
srmReserveCleanPendingRequestsOnRestart=${srmCleanPendingRequestsOnRestart}
srmReserveKeepRequestHistoryPeriod=${srmKeepRequestHistoryPeriod}
srmReserveExpiredRequestRemovalPeriod=${srmExpiredRequestRemovalPeriod}
srmReserveRequestHistoryDatabaseEnabled=${srmRequestHistoryDatabaseEnabled}
srmReserveStoreCompletedRequestsOnly=${srmStoreCompletedRequestsOnly}



srmGetLifeTime=14400000
srmBringOnlineLifeTime=${srmGetLifeTime}
srmPutLifeTime=14400000
srmCopyLifeTime=14400000


# ---- File system root exported by the srm service
pnfsSrmPath=/


# ---- Seconds before pool requests time out
srmPoolTimeout=300


# ---- Seconds before namespace operations time out
srmPnfsTimeout=300


# ---- Seconds before mover requests time out
srmMoverTimeout=7200


# ---- Seconds before pool manager requests time out
srmPoolManagerTimeout=300


remoteCopyMaxTransfers=150
remoteHttpMaxTransfers=30
remoteGsiftpMaxTransfers=${srmCopyReqThreadPoolSize}
remoteGsiftpIoQueue=


# ---- Enable automatic creation of directories
#
# Allow automatic creation of directories via SRM.
#
# allow=true, disallow=false
#
RecursiveDirectoryCreation=true


# ---- Allow delete via SRM
#
# Allow deletion of files via the SRM interface.
#
# allow=true, disallow=false
#
AdvisoryDelete=true


# ---- Enable overwrite for SRM v1.1.
#
# Set the following property to true if you want overwrite to be
# enabled for the SRM v1.1 interface as well as for the SRM v2.2
# interface when the client does not specify a desired overwrite
# mode. This option will be considered only if the overwriteEnabled
# variable is set to true.
#
srmOverwriteByDefault=false


# ---- Number of concurrent file deletions
#
# To avoid starving other name space operations, the srm throttles
# bulk file deletion. This setting controls the number of concurrent
# file deletion requests submitted to PnfsManager.
#
srmSizeOfSingleRemoveBatch=100


# ---- Directory for delegated proxy certificates
#
# This is the directory in which the delegated user credentials will
# be stored as files. We recommend setting permissions to 700 on this
# directory.
#
srmUserCredentialsDirectory=/var/lib/dcache/credentials


# ---- Login broker update period in milliseconds
srmLoginBrokerUpdatePeriod=3000


# ---- Number of doors in random door selection
#
# SRM will order doors according to their load, select a certain
# number of the least loaded, and then randomly choose which one to
# use.
#
srmNumberOfDoorsInRandomSelection=5


# ---- Days before old transfers are removed from the database
#
# The srm will hold SRM requests and their history in the database
# for srmKeepRequestHistoryPeriod days; after that they will be
# removed.
#
srmKeepRequestHistoryPeriod=10


# ---- Seconds between removing old transfers from the database
#
# How frequently to remove old requests from the database.
#
srmExpiredRequestRemovalPeriod=60


# ---- Enables SRM request transition history logging
#
# Enables logging of the transition history of SRM requests in the
# database. The request transitions can be examined through the
# command line interface or through the srmWatch monitoring tool.
#
# Enabling this feature increases the size and load of the database.
#
srmRequestHistoryDatabaseEnabled=false
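
# Example (retention values are illustrative): a site running srmWatch
# could enable history logging and shorten the retention period in
# etc/dcache.conf:
#
# srmRequestHistoryDatabaseEnabled=true
# srmKeepRequestHistoryPeriod=5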


# ---- Database threads
#
# Database updates are queued and their execution is decoupled from
# the execution of SRM requests. The setting controls the number of
# threads that will be dedicated to the execution of these updates.
#
srmJdbcExecutionThreadNum=5


# ---- Database request queue depth
#
# Database updates are queued and their execution is decoupled from
# the execution of SRM requests. The setting controls the maximum
# length of the queue.
#
srmMaxNumberOfJdbcTasksInQueue=1000


# ---- Whether to store only completed requests in the database
srmStoreCompletedRequestsOnly=false


# ---- Whether the srm stores requests in a database
srmDatabaseEnabled=true


# ---- Enable cleaning of pending requests during restart
#
# If enabled, any requests still pending when the srm is restarted
# will have their state changed to Failed or Done.
#
srmCleanPendingRequestsOnRestart=false


# ---- srmClientDNSLookup
#
# Perform the lookup of the client hostname based on the client's IP
# address. The result is used in pool selection. If srmClientDNSLookup
# is set to false the client's IP address is used.
#
srmClientDNSLookup=false


srmGracefulShutdown=2000



# ---- Enable custom address resolution.
#
# The srmCustomGetHostByAddr option enables a custom IP resolution
# if the standard InetAddress method fails. Contributed by BNL.
#
srmCustomGetHostByAddr=false


# ---- Disable request protocol order
#
# Transfer protocols are negotiated between the SRM client and the srm
# service. The client provides an ordered list of protocols it
# supports. The server picks the first protocol it supports.
#
# When set to true, the server ignores the order. This is needed
# for some old srmcp clients.
#
srmIgnoreClientProtocolOrder=false


# ----- Whether to pin disk files
#
# The SRM protocol allows files to be pinned. The pin suppresses
# automatic garbage collection for the lifetime of the pin.
#
# Since dCache pools may be configured to only serve particular types
# of requests and not every pool may be configured to serve a
# particular read request, strict protocol compliance requires pinning
# even for disk only files.
#
# Often strict protocol compliance is however unnecessary, or disk
# files may be known to always be on read pools. In those cases one
# can skip the pinning step and thus reduce the latency of
# srmPrepareToGet request.
#
# When this property is set to false, files with access latency of
# ONLINE will not be pinned. If all files in the system have access
# latency of ONLINE, then the SRM will not use the pin manager at
# all. Note that when this property is set to false, orphaned file
# location entries in the name space will not be validated during
# srmPrepareToGet processing. The consequence is that
# srmPrepareToGet may succeed for a lost file and the subsequent
# transfer will fail.
#
srmPinOnlineFiles=true
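
# Example: a disk-only installation in which all files have access
# latency ONLINE could skip the pinning step by setting in
# etc/dcache.conf:
#
# srmPinOnlineFiles=false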


# ---- Quality of Service plugins
#
# To enable the Terapaths plugin, define:
# qosPluginClass=org.dcache.srm.qos.terapaths.TerapathsPlugin
# qosConfigFile=${dcache.paths.config}/terapaths.properties
#
# To enable the Lambda Station plugin, define:
# qosPluginClass=org.dcache.srm.qos.terapaths.LambdaStation
# qosConfigFile=${dcache.paths.config}/lambdastation.properties
#
qosPluginClass=
qosConfigFile=


srmImplicitSpaceManagerEnabled=true
srmSpaceReservationStrict=true



#
# Document which TCP ports are opened
#
(immutable)srm/net.ports.tcp=${port}



#
# Obsolete or Forbidden properties
#
(forbidden)srmDbHost=use srmDatabaseHost instead
(forbidden)srmPnfsManager=use pnfsmanager property, defined in the srm service
(forbidden)srmPoolManager=use the poolmanager property, defined the srm service
(forbidden)srmNumberOfDaysInDatabaseHistory=use srmKeepRequestHistoryPeriod instead
(forbidden)srmOldRequestRemovalPeriodSeconds=use srmExpiredRequestRemovalPeriod instead
(forbidden)srmJdbcMonitoringLogEnabled=use srmRequestHistoryDatabaseEnabled instead
(forbidden)srmJdbcSaveCompletedRequestsOnly=use srmStoreCompletedRequestsOnly instead
(forbidden)srmJdbcEnabled=use srmDatabaseEnabled instead





WEBADMIN


# -----------------------------------------------------------------------
# Default values for Webadmin doors
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for Webadmin
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Authenticated Version of Webadmin
#
# All admin pages are available in authenticated mode only. When set
# to false, webadmin needs less configuration and is more similar to
# the legacy product.
webadminAuthenticated=false
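
# Example (the GID is illustrative): to enable the admin pages, run
# webadmin in authenticated mode and grant admin rights to a group:
#
# webadminAuthenticated=true
# webadminAdminGid=1001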


# ---- TCP port for Webadmin (HTTP)
#
# Specifies the TCP port on which the Webadmin accepts http connections.
#
webadminHttpPort=8080


# ---- TCP port for Webadmin (HTTPS)
#
# Specifies the TCP port on which the Webadmin accepts https connections.
#
webadminHttpsPort=8444


# ---- Path containing the webapp
#
# Specifies the path to the directory with the .war file containing
# the Webadmin interface application.
webadminWebappsPath=${dcache.paths.classes}/webapps


# ---- Name of the dCache-Instance
#
# This name will be displayed as a header on some of the web pages.
webadminDCacheInstanceName=InstanceName


# ---- GID a user must have to be considered an admin of the webadmin interface
#
# A user with this GID can become an admin of the webadmin interface.
webadminAdminGid=1000


# ---- Timeout for the data collecting cell
#
# timeout value in ms for the cell gathering data from dCache for webadmin
collectorTimeout=5000


# ---- Update time for the data collecting cell
#
# Time interval in seconds between two transfer (also known as mover)
# collector runs. Be careful not to set this much lower than 60: the
# smaller the interval, the higher the network load.
transfersCollectorUpdate=60



# ---- Cell name
#
# the name of the webadmin cell.
#
webadmin/cell.name=webadmin


#
# Document which TCP ports are opened
#
(immutable)webadmin/net.ports.tcp-when-webadminAuthenticated-is-false=${webadminHttpPort}
(immutable)webadmin/net.ports.tcp-when-webadminAuthenticated-is-true=${webadminHttpPort} ${webadminHttpsPort}
(immutable)webadmin/net.ports.tcp=${net.ports.tcp-when-webadminAuthenticated-is-${webadminAuthenticated}}



WEBDAV


# -----------------------------------------------------------------------
# Default values for WebDAV doors
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for WebDAV
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Name of WebDAV door
#
webdav/cell.name=WebDAV-${host.name}


# ---- PnfsManager timeout
#
# Specifies the timeout in milliseconds for communication with the
# PnfsManager cell.
#
webdavPnfsTimeout=120000


# ---- PoolManager timeout
#
# Specifies the timeout in milliseconds for communication with the
# PoolManager cell.
#
webdavPoolManagerTimeout=300000


# ---- Pool timeout
#
# Specifies the timeout in milliseconds for communication with the
# pool cells.
#
webdavPoolTimeout=10000


# ---- gPlazma timeout
#
# Specifies the timeout in milliseconds for communication with the
# gPlazma cell.
#
webdavGplazmaTimeout=180000



# ---- Mover kill timeout
#
# Specifies the timeout in milliseconds after which the WebDAV door
# gives up on waiting for a mover to respond to a kill request.
#
webdavKillTimeout=1500


# ---- Mover queue timeout
#
# Specifies the timeout in milliseconds after which the WebDAV door
# gives up on waiting for a mover to start. This places a bound on
# the maximum mover queue time.
#
webdavMoverTimeout=180000


# ---- Mover finished timeout
#
# Specifies the timeout in milliseconds for how long the WebDAV door
# waits for a confirmation from the mover after a transfer has
# completed.
#
webdavTransferConfirmationTimeout=60000


# ---- TCP port for WebDAV door
#
# Specifies the TCP port on which the WebDAV door accepts connections.
#
webdavPort=2880
webdav/port=${webdavPort}


# ---- What IP address to listen on for WebDAV door
#
# Specifies the IP address to which the WebDAV door listens for
# connections from clients. Defaults to the wildcard address.
#
webdavAddress=0.0.0.0


# ---- What IP address to use for connections from pools to the WebDAV door
#
# For uploads, pools create a TCP connection to the WebDAV door.
# If this property is empty, the WebDAV door will choose a local
# address. Note that this address must not be a wildcard address.
#
webdavInternalAddress=


# ---- Whether to redirect GET requests to a pool
#
# If true, WebDAV doors will respond with a 302 redirect pointing to
# a pool holding the file. This requires that a pool can accept
# incoming TCP connections and that the client follows the
# redirect. If false, data is relayed through the door. The door
# will establish a TCP connection to the pool.
#
webdav.redirect.on-read=true
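
# Example: if the pools cannot accept incoming TCP connections from
# clients (e.g. they are behind a firewall), relay the data through
# the door instead:
#
# webdav.redirect.on-read=false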


# ---- Root path of WebDAV door
#
# Specifies the root directory exposed through the WebDAV door. Used
# by both the WebDAV and SRM services.
#
webdavRootPath=/


# ---- Paths which are accessible through WebDAV
#
# This parameter is set to the absolute paths to directories
# accessible through WebDAV. Multiple directories are separated by a
# colon.
#
webdavAllowedPaths=/
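
# Example (the directories are hypothetical): to limit WebDAV access
# to two directory trees:
#
# webdavAllowedPaths=/data/atlas:/data/cms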


# ---- Whether the WebDAV door is read only
#
# When set to true, only read operations are allowed through WebDAV.
#
webdavReadOnly=false


# ---- Whether existing files may be overwritten
#
# The WebDAV protocol specifies that a PUT overwrites existing files
# (but not directories). If this property is set to true, then
# dCache honors this aspect of the WebDAV specification. If set to
# false, attempts to overwrite existing files will be denied.
#
# Some clients expect that PUT indeed overwrites existing files. In
# particular Mac OS X is known to have issues writing and deleting
# files with dCache when this property is false.
#
webdav.overwrite=false
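
# Example: to follow the WebDAV specification and support clients,
# such as Mac OS X, that expect PUT to overwrite:
#
# webdav.overwrite=true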


# ---- Level of access granted to anonymous users through WebDAV
#
# Valid values are: NONE, READONLY, FULL
# The default is: NONE
#
# Specifies which HTTP methods are granted to anonymous
# clients. NONE specifies that all anonymous requests will be
# rejected; READONLY specifies that only read requests are allowed
# (that is, GET, HEAD, OPTIONS and PROPFIND); FULL specifies that
# all HTTP methods are allowed.
#
# Anonymous clients are still subject to authorisation: Only
# operations with world access are granted.
#
webdavAnonymousAccess=NONE
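
# Example: to let anonymous clients read and list world-accessible
# content while rejecting all anonymous writes:
#
# webdavAnonymousAccess=READONLY
# webdavAnonymousListing=true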


# ---- Whether anonymous listing is allowed
#
# When false, unauthenticated users are prevented from listing the
# contents of directories. When true and webdavAnonymousAccess is
# not 'NONE' then unauthenticated users may list the contents of any
# world-readable directory.
#
webdavAnonymousListing=true


# ---- Mover queue
#
# The mover queue on the pool to which WebDAV transfers will be
# scheduled. If blank, the default queue will be used.
#
webdavIoQueue=


# ---- Whether to use HTTP or HTTPS for WebDAV
#
# Valid values: http, https, https-jglobus
#
# Specifies whether the HTTP or the HTTPS protocol is used. For
# HTTPS, a server certificate and a trust store need to be
# created.
#
# Alternatively to https, the https-jglobus option provides HTTPS
# support through the JGlobus library. JGlobus accesses the host and
# CA certificates in /etc/grid-security/ directly. In contrast to
# the plain Java SSL implementation JGlobus accepts proxy
# certificates, including VOMS proxy certificates. The protocol
# between the client and the server is however the same for https
# and https-jglobus.
#
webdavProtocol=http
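
# Example: a grid site that must accept VOMS proxy certificates could
# serve HTTPS through the JGlobus library:
#
# webdavProtocol=https-jglobus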


# ---- Server certificate
#
# This parameter specifies the path to the file containing the
# PKCS12 encoded server certificate. When using https as the
# webdavProtocol, the host certificate in /etc/grid-security/ needs
# to be converted to PKCS12 format before it can be used with the
# WebDAV door. Use the 'bin/dcache import hostcert' command to
# perform this task.
#
webdavKeyStore=${keyStore}


# ---- Password for server certificate
#
# This parameter specifies the password with which the PKCS12 encoded
# server certificate is encrypted.
#
webdavKeyStorePassword=${keyStorePassword}


# ---- Trusted CA certificates
#
# This parameter specifies the path to a Java Keystore containing
# the trusted CA certificates used by the WebDAV door. When using
# https as the webdavProtocol, the CA certificates in
# /etc/grid-security/certificates/ need to be converted into a Java
# Keystore file before they can be used with the WebDAV door. Use
# the 'bin/dcache import cacerts' command to perform this task.
#
webdavTrustStore=${trustStore}


# ---- Password for trusted CA certificates
#
# This parameter specifies the password with which the Java Keystore
# containing the trusted CA certificates is encrypted.
#
webdavTrustStorePassword=${trustStorePassword}


# ---- Whether client certificates are accepted for HTTPS
#
# This parameter specifies whether the WebDAV door will accept a client
# certificate for authentication.
#
webdavWantClientAuth=true


# ---- Whether client certificates are required for HTTPS
#
# This parameter specifies whether the WebDAV door will require a
# client certificate for authentication.
#
webdavNeedClientAuth=false


# ---- Whether HTTP Basic authentication is enabled
#
# When enabled a user name and password will be requested on
# authorization failures.
#
# Note that HTTP Basic authentication essentially transfers
# passwords in clear text. A secure setup should only use HTTP Basic
# authentication over HTTPS.
#
webdavBasicAuthentication=false
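
# Example: since passwords are sent essentially in clear text, enable
# HTTP Basic authentication only together with HTTPS:
#
# webdavProtocol=https
# webdavBasicAuthentication=true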


# ---- Location for static content
#
# The WebDAV door provides HTML renderings of directories and error
# messages. The artwork and other static content used in the HTML
# rendering is exposed through the WebDAV door itself in a virtual
# directory.
#
# This parameter specifies the location to use for the virtual
# directory. The virtual directory masks any real directory with the
# same path in dCache's name space.
#
webdav.static-content.location=/.webdav


# ---- Directory with default static content
#
# The directory in the local file system containing the default
# artwork and other static content used in the WebDAV door's HTML
# renderings.
#
webdav.static-content.dir.default=${dcache.paths.share}/webdav/static


# ---- Directory with custom static content
#
# The directory in the local file system containing custom artwork
# and other static content used in the WebDAV door's HTML
# renderings. Any file placed in this directory masks files of the
# same name in the default static content directory.
#
webdav.static-content.dir.local=/var/lib/dcache/webdav/local


# ---- Base URI for static content
#
# The base URI indicating the location of artwork and other static
# content used in the WebDAV door's HTML renderings.
#
# This is exposed as the string $static$ inside templates.
#
webdav.static-content.uri=${webdav.static-content.location}


# ---- Path to HTML template for directory listing
#
# To customize the look and feel of the HTML rendering of a
# directory, modify a copy of this file and redefine the property to
# point to the copy.
#
webdav.templates.html=file:${dcache.paths.share}/webdav/templates/html.stg
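
# Example (the path is hypothetical): after copying and editing the
# template, point the property at the copy:
#
# webdav.templates.html=file:/etc/dcache/my-html.stg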


#
# Document which TCP ports are opened
#
(immutable)webdav/net.ports.tcp=${port}


# ---- Artwork elements
#
# These properties are obsolete. Modify the template to customize the
# look and feel.
#
(obsolete)webdav.images.logo=modify the template to customize the look and feel
(obsolete)webdav.images.directory=modify the template to customize the look and feel
(obsolete)webdav.images.file=modify the template to customize the look and feel
(obsolete)webdav.style.css=modify the template to customize the look and feel
(obsolete)webdavLogoPath=modify the template to customize the look and feel
(obsolete)webdavDirIconPath=modify the template to customize the look and feel
(obsolete)webdavFileIconPath=modify the template to customize the look and feel
(obsolete)webdavCssPath=modify the template to customize the look and feel
(obsolete)logoPath=modify the template to customize the look and feel
(obsolete)dirIconPath=modify the template to customize the look and feel
(obsolete)fileIconPath=modify the template to customize the look and feel
(obsolete)cssPath=modify the template to customize the look and feel
(forbidden)webdav.templates.list=Use webdav.templates.html instead
(forbidden)httpPortNumber=use webdavPort instead
(forbidden)httpRootPath=use webdavRootPath instead
(forbidden)httpAllowedPaths=use webdavAllowedPaths instead
(forbidden)webdavContextPath=use webdav.static-content.location instead





XROOTD-ALICE-TOKEN


# -----------------------------------------------------------------------
# Default values for alice-token xrootd authorization plugin
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for alice-token
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Key store file
#
# Keystore file that holds the keypairs needed to do the token based
# authorisation. An example keystore file can be found in
# share/examples/xrootd/keystore. Copy this file into
# ${dcache.paths.etc} and modify as appropriate.
xrootdAuthzKeystore=${dcache.paths.etc}/keystore


# ---- Obsolete properties
#
(obsolete)nostrongauthorization=



XROOTD-GSI


xrootd.gsi.hostcert.key=/etc/grid-security/hostkey.pem
xrootd.gsi.hostcert.cert=/etc/grid-security/hostcert.pem
xrootd.gsi.hostcert.refresh=${hostCertificateRefreshPeriod}
xrootd.gsi.hostcert.verify=${verifyHostCertificateChain}
xrootd.gsi.ca.path=/etc/grid-security/certificates
xrootd.gsi.ca.refresh=${trustAnchorRefreshPeriod}



XROOTD


# -----------------------------------------------------------------------
# Default values for xrootd
# -----------------------------------------------------------------------
#
# This Java properties file contains default values for xrootd
# configuration parameters. All values can be redefined in
# etc/dcache.conf. Do not modify any values here as your changes
# will be lost when you next upgrade.


# ---- Name of Xrootd door
#
xrootd/cell.name=Xrootd-${host.name}


# ---- TCP port for Xrootd door
#
# Specifies the TCP port on which the Xrootd door accepts connections.
#
xrootdPort=1094
xrootd/port=${xrootdPort}


# ---- Worker thread limit
#
# Maximum number of worker threads used by the door. Since the worker
# threads block on name space and pool manager operations, a
# relatively large value is needed.
#
xrootdThreads=1000


# ---- TCP backlog used by xrootd
#
# 1024 is typically the largest value allowed by the OS.
#
xrootdBacklog=1024


# ---- Queue memory limit
#
# Memory limit in bytes for xrootd frames queued for processing by a
# worker thread.
#
xrootdMaxTotalMemorySize=16777216


# ---- Per connection queue memory limit
#
# Per connection memory limit in bytes for xrootd frames queued for
# processing by a worker thread.
#
xrootdMaxChannelMemorySize=16777216


# ---- PoolManager timeout
#
# Specifies the timeout in milliseconds for communication with the
# PoolManager cell.
#
xrootdPoolManagerTimeout=5400000


# ---- Pool timeout
#
# Specifies the timeout in milliseconds for communication with the
# pool cells.
#
xrootdPoolTimeout=15000


# ---- Mover queue timeout
#
# Specifies the timeout in milliseconds after which the xrootd door
# gives up on waiting for a mover to start. This places a bound on
# the maximum mover queue time.
#
xrootdMoverTimeout=180000


# ---- Root path of Xrootd door
#
# Specifies the root directory exposed through the Xrootd door. Used
# by both the xrootd and SRM services.
#
xrootdRootPath=/


# ---- Global read-only
#
# This variable controls whether any write access is permitted.
# This is to avoid any unauthenticated writes. The variable
# overrides all other authorization settings.
#
xrootdIsReadOnly=true


# ---- Allowed paths
#
# These parameters are set to the absolute paths of directories
# accessible through Xrootd. Multiple directories are separated by a
# colon. Different directories may be specified for reads and
# writes. An empty list will disallow access to all directories.
#
xrootdAllowedPaths=/
xrootdAllowedReadPaths=${xrootdAllowedPaths}
xrootdAllowedWritePaths=${xrootdAllowedPaths}
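
# Example (the write path is hypothetical): to allow reads everywhere
# but writes only below a dedicated directory:
#
# xrootdAllowedReadPaths=/
# xrootdAllowedWritePaths=/data/uploads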


# ---- Used authentication
#
# The authentication plugin. Currently legal values for this property are
#
# none
# gsi
#
# where "none" means that no authentication is performed, while "gsi" means
# that any xrootd request to the door will be GSI encrypted.
xrootdAuthNPlugin=none
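
# Example: to require GSI authentication and permit writes (recall
# that xrootdIsReadOnly overrides all other authorization settings):
#
# xrootdAuthNPlugin=gsi
# xrootdIsReadOnly=false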


# ---- Verification of the issuer chain of the host certificate (GSI sec)
#
# This can have advantages and disadvantages. If the host
# certificates used are part of a Grid environment, where they are
# supposed to be signed by trusted CA certificates, setting this to
# true establishes a fail-fast behaviour.
#
# If the certificates are self-signed or signed by a custom CA, this
# value should be set to false.
verifyHostCertificateChain=true


# ---- Authorization plugin
#
# The authorization plugin provides a policy decision point (PDP)
# for authorization decisions in the xrootd door. An authorization
# plugin can also perform LFN to PFN mappings.
#
# Third party plugins can be used by adding the plugin to the plugin
# directory of dCache and specifying the plugin name here.
#
xrootdAuthzPlugin=none


# ---- User identity used for authorizing operations
#
# As xrootd requests are not authenticated, an identity has to be
# chosen for authorizing operations. All operations are performed as
# this identity.
#
# The authorization controlled by this parameter is different from
# the authorization performed by the authorization plugin: The
# authorization plugin validates the requests themselves
# independently of the file which is accessed. E.g. the token based
# authorization verifies that the request contains a
# cryptographically signed token from a trusted source.
#
# Once the request is authorized it is subject to further
# authorization by other components in dCache, e.g. PnfsManager or
# PoolManager. Those authorizations happen with respect to the
# identity defined by xrootdUser.
#
# The default is 'nobody', meaning that only world-permissible
# operations are allowed. Other valid values are 'root' (no further
# authorization) and a string of the form UID:GID[,GID...], i.e. a
# numerical UID followed by one or more numerical GIDs. The first
# GID is the primary one.
#
xrootdUser=nobody
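
# Example (the IDs are hypothetical): to perform all operations as
# UID 500 with primary GID 100 and secondary GID 101:
#
# xrootdUser=500:100,101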


# ---- Mover queue
#
# The mover queue on the pool to which this request will be
# scheduled.
#
xrootdIoQueue=


# ---- Mover-idle timeout
#
# Specifies the timeout in milliseconds after which clients that
# connect to the pool request handler but do not open any files are
# disconnected.
xrootd.mover.timeout.idle=300000


#
# Document which TCP ports are opened
#
(immutable)xrootd/net.ports.tcp=${port}