How To Use Systemctl to Manage Systemd Services and Units

Introduction

Systemd is an init system and system manager that is widely becoming the new standard for Linux machines. While there are differing opinions about whether systemd is an improvement over the traditional SysV init systems it is replacing, the majority of distributions plan to adopt it or have already done so.

Due to its heavy adoption, familiarizing yourself with systemd is well worth the trouble, as it will make administering these servers considerably easier. Learning about and utilizing the tools and daemons that comprise systemd will help you better appreciate the power, flexibility, and capabilities it provides, or at least help you to do your job with minimal hassle.

In this guide, we will be discussing the systemctl command, which is the central management tool for controlling the init system. We will cover how to manage services, check statuses, change system states, and work with the configuration files.

Service Management

The fundamental purpose of an init system is to initialize the components that must be started after the Linux kernel is booted (traditionally known as “userland” components). The init system is also used to manage services and daemons for the server at any point while the system is running. With that in mind, we will start with some simple service management operations.

In systemd, the target of most actions are “units”, which are resources that systemd knows how to manage. Units are categorized by the type of resource they represent and they are defined with files known as unit files. The type of each unit can be inferred from the suffix on the end of the file.

For service management tasks, the target unit will be service units, which have unit files with a suffix of .service. However, for most service management commands, you can actually leave off the .service suffix, as systemd is smart enough to know that you probably want to operate on a service when using service management commands.

Starting and Stopping Services

To start a systemd service, executing instructions in the service’s unit file, use the start command. If you are running as a non-root user, you will have to use sudo since this will affect the state of the operating system:

sudo systemctl start application.service

As we mentioned above, systemd knows to look for *.service files for service management commands, so the command could just as easily be typed like this:

sudo systemctl start application

Although you may use the above format for general administration, for clarity, we will use the .service suffix for the remainder of the commands to be explicit about the target we are operating on.

To stop a currently running service, you can use the stop command instead:

sudo systemctl stop application.service

Restarting and Reloading

To restart a running service, you can use the restart command:

sudo systemctl restart application.service

If the application in question is able to reload its configuration files (without restarting), you can issue the reload command to initiate that process:

sudo systemctl reload application.service

If you are unsure whether the service has the functionality to reload its configuration, you can issue the reload-or-restart command. This will reload the configuration in-place if available. Otherwise, it will restart the service so the new configuration is picked up:

sudo systemctl reload-or-restart application.service

Enabling and Disabling Services

The above commands are useful for starting or stopping services during the current session. To tell systemd to start services automatically at boot, you must enable them.

To start a service at boot, use the enable command:

sudo systemctl enable application.service

This will create a symbolic link from the system’s copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants. We will go over what a target is later in this guide).

To disable the service from starting automatically, you can type:

sudo systemctl disable application.service

This will remove the symbolic link that indicated that the service should be started automatically.

Keep in mind that enabling a service does not start it in the current session. If you wish to start the service and enable it at boot, you will have to issue both the start and enable commands.
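On versions of systemd that support it (220 and later), the --now flag combines the two operations into one, starting the service immediately as it is enabled:

sudo systemctl enable --now application.service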

Checking the Status of Services

To check the status of a service on your system, you can use the status command:

systemctl status application.service

This will provide you with the service state, the cgroup hierarchy, and the first few log lines.

For instance, when checking the status of an Nginx server, you may see output like this:

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2015-01-27 19:41:23 EST; 22h ago
 Main PID: 495 (nginx)
   CGroup: /system.slice/nginx.service
           ├─495 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; error_log stderr;
           └─496 nginx: worker process

Jan 27 19:41:23 desktop systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 27 19:41:23 desktop systemd[1]: Started A high performance web server and a reverse proxy server.

This gives you a nice overview of the current status of the application, notifying you of any problems and any actions that may be required.

There are also methods for checking for specific states. For instance, to check to see if a unit is currently active (running), you can use the is-active command:

systemctl is-active application.service

This will return the current unit state, which is usually active or inactive. The exit code will be “0” if it is active, making the result simpler to parse programmatically.

To see if the unit is enabled, you can use the is-enabled command:

systemctl is-enabled application.service

This will output whether the service is enabled or disabled and will again set the exit code to “0” or “1” depending on the answer to the command question.

A third check is whether the unit is in a failed state. This indicates that there was a problem starting the unit in question:

systemctl is-failed application.service

This will return active if it is running properly or failed if an error occurred. If the unit was intentionally stopped, it may return unknown or inactive. An exit status of “0” indicates that a failure occurred and an exit status of “1” indicates any other status.
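Because these commands communicate through their exit codes, they are convenient to use in shell scripts. Here is a minimal sketch (application.service is a placeholder name) that checks whether a service is running and restarts it if it is not:

if systemctl is-active --quiet application.service; then
    echo "application is running"
else
    echo "application is not running; restarting"
    sudo systemctl restart application.service
fi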

System State Overview

The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system. There are a number of systemctl commands that provide this information.

Listing Current Units

To see a list of all of the active units that systemd knows about, we can use the list-units command:

systemctl list-units

This will show you a list of all of the units that systemd currently has active on the system. The output will look something like this:

UNIT                                      LOAD   ACTIVE SUB     DESCRIPTION
atd.service                               loaded active running ATD daemon
avahi-daemon.service                      loaded active running Avahi mDNS/DNS-SD Stack
dbus.service                              loaded active running D-Bus System Message Bus
dcron.service                             loaded active running Periodic Command Scheduler
dkms.service                              loaded active exited  Dynamic Kernel Modules System
getty@tty1.service                        loaded active running Getty on tty1

. . .

The output has the following columns:

  • UNIT: The systemd unit name
  • LOAD: Whether the unit’s configuration has been parsed by systemd. The configuration of loaded units is kept in memory.
  • ACTIVE: A summary state about whether the unit is active. This is usually a fairly basic way to tell if the unit has started successfully or not.
  • SUB: This is a lower-level state that indicates more detailed information about the unit. This often varies by unit type, state, and the actual method in which the unit runs.
  • DESCRIPTION: A short textual description of what the unit is/does.

Since the list-units command shows only active units by default, all of the entries above will show “loaded” in the LOAD column and “active” in the ACTIVE column. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments:

systemctl

We can tell systemctl to output different information by adding additional flags. For instance, to see all of the units that systemd has loaded (or attempted to load), regardless of whether they are currently active, you can use the --all flag, like this:

systemctl list-units --all

This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.

You can use other flags to filter these results. For example, we can use the --state= flag to indicate the LOAD, ACTIVE, or SUB states that we wish to see. You will have to keep the --all flag so that systemctl allows non-active units to be displayed:

systemctl list-units --all --state=inactive

Another common filter is the --type= filter. We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use:

systemctl list-units --type=service
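These filters can be combined. For instance, to display only service units that are currently in the running SUB state, you can type:

systemctl list-units --type=service --state=running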

Listing All Unit Files

The list-units command only displays units that systemd has attempted to parse and load into memory. Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead:

systemctl list-unit-files

Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.

UNIT FILE                                  STATE   
proc-sys-fs-binfmt_misc.automount          static  
dev-hugepages.mount                        static  
dev-mqueue.mount                           static  
proc-fs-nfsd.mount                         static  
proc-sys-fs-binfmt_misc.mount              static  
sys-fs-fuse-connections.mount              static  
sys-kernel-config.mount                    static  
sys-kernel-debug.mount                     static  
tmp.mount                                  static  
var-lib-nfs-rpc_pipefs.mount               static  
org.cups.cupsd.path                        enabled

. . .

The state will usually be “enabled”, “disabled”, “static”, or “masked”. In this context, static means that the unit file does not contain an “install” section, which is used to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.

We will cover what “masked” means momentarily.

Unit Management

So far, we have been working with services and displaying information about the units and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.

Displaying a Unit File

To display the unit file that systemd has loaded into its system, you can use the cat command (this was added in systemd version 209). For instance, to see the unit file of the atd scheduling daemon, we could type:

systemctl cat atd.service
[Unit]
Description=ATD daemon

[Service]
Type=forking
ExecStart=/usr/bin/atd

[Install]
WantedBy=multi-user.target

The output is the unit file as known to the currently running systemd process. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later).

Displaying Dependencies

To see a unit’s dependency tree, you can use the list-dependencies command:

systemctl list-dependencies sshd.service

This will display a hierarchy mapping the dependencies that must be dealt with in order to start the unit in question. Dependencies, in this context, include those units that are either required by or wanted by the units above it.

sshd.service
├─system.slice
└─basic.target
  ├─microcode.service
  ├─rhel-autorelabel-mark.service
  ├─rhel-autorelabel.service
  ├─rhel-configure.service
  ├─rhel-dmesg.service
  ├─rhel-loadmodules.service
  ├─paths.target
  ├─slices.target

. . .

The recursive dependencies are only displayed for .target units, which indicate system states. To recursively list all dependencies, include the --all flag.

To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other flags that are useful are the --before and --after flags, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.
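For example, to list the units that depend on sshd.service, you can type:

systemctl list-dependencies --reverse sshd.service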

Checking Unit Properties

To see the low-level properties of a unit, you can use the show command. This will display a list of properties that are set for the specified unit using a key=value format:

systemctl show sshd.service
Id=sshd.service
Names=sshd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=syslog.target network.target auditd.service systemd-journald.socket basic.target system.slice
Description=OpenSSH server daemon

. . .

If you want to display a single property, you can pass the -p flag with the property name. For instance, to see the conflicts that the sshd.service unit has, you can type:

systemctl show sshd.service -p Conflicts
Conflicts=shutdown.target

Masking and Unmasking Units

We saw in the service management section how to stop or disable a service, but systemd also has the ability to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command:

sudo systemctl mask nginx.service

This will prevent the Nginx service from being started, automatically or manually, for as long as it is masked.

If you check the list-unit-files, you will see the service is now listed as masked:

systemctl list-unit-files
. . .

kmod-static-nodes.service              static  
ldconfig.service                       static  
mandb.service                          static  
messagebus.service                     static  
nginx.service                          masked
quotaon.service                        static  
rc-local.service                       static  
rdisc.service                          disabled
rescue.service                         static

. . .

If you attempt to start the service, you will see a message like this:

sudo systemctl start nginx.service
Failed to start nginx.service: Unit nginx.service is masked.

To unmask a unit, making it available for use again, simply use the unmask command:

sudo systemctl unmask nginx.service

This will return the unit to its previous state, allowing it to be started or enabled.

Editing Unit Files

While the specific format for unit files is outside of the scope of this tutorial, systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments. This functionality was added in systemd version 218.

The edit command, by default, will open a unit file snippet for the unit in question:

sudo systemctl edit nginx.service

This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.

Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file. The snippet’s directives will take precedence over those found in the original unit file.
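As a brief illustrative sketch, a snippet that makes a service restart automatically after a failure might look like this (the directive values here are examples, not recommendations):

[Service]
Restart=on-failure
RestartSec=5

Note that list-type directives such as ExecStart= accumulate rather than replace, so a snippet that redefines one must first clear it with an empty assignment (a bare ExecStart= line) before providing the new value.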

If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag:

sudo systemctl edit --full nginx.service

This will load the current unit file into the editor, where it can be modified. When the editor exits, the changed file will be written to /etc/systemd/system, which will take precedence over the system’s unit definition (usually found somewhere in /lib/systemd/system).

To remove any additions you have made, either delete the unit’s .d configuration directory or the modified service file from /etc/systemd/system. For instance, to remove a snippet, we could type:

sudo rm -r /etc/systemd/system/nginx.service.d

To remove a full modified unit file, we would type:

sudo rm /etc/systemd/system/nginx.service

After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:

sudo systemctl daemon-reload

Adjusting the System State (Runlevel) with Targets

Targets are special unit files that describe a system state or synchronization point. Like other units, the files that define targets can be identified by their suffix, which in this case is .target. Targets do not do much themselves, but are instead used to group other units together.

This can be used in order to bring the system to certain states, much like other init systems use runlevels. They are used as a reference for when certain functions are available, allowing you to specify the desired state instead of the individual units needed to produce that state.

For instance, there is a swap.target that is used to indicate that swap is ready for use. Units that are part of this process can sync with this target by indicating in their configuration that they are WantedBy= or RequiredBy= the swap.target. Units that require swap to be available can specify this condition using the Wants=, Requires=, and After= specifications to indicate the nature of their relationship.

Getting and Setting the Default Target

The systemd process has a default target that it uses when booting the system. Satisfying the cascade of dependencies from that single target will bring the system into the desired state. To find the default target for your system, type:

systemctl get-default
multi-user.target

If you wish to set a different default target, you can use the set-default command. For instance, if you have a graphical desktop installed and you wish for the system to boot into that by default, you can change your default target accordingly:

sudo systemctl set-default graphical.target

Listing Available Targets

You can get a list of the available targets on your system by typing:

systemctl list-unit-files --type=target

Unlike runlevels, multiple targets can be active at one time. An active target indicates that systemd has attempted to start all of the units tied to the target and has not tried to tear them down again. To see all of the active targets, type:

systemctl list-units --type=target

Isolating Targets

It is possible to start all of the units associated with a target and stop all units that are not part of the dependency tree. The command that we need to do this is called, appropriately, isolate. This is similar to changing the runlevel in other init systems.

For instance, if you are operating in a graphical environment with graphical.target active, you can shut down the graphical system and put the system into a multi-user command line state by isolating the multi-user.target. Since graphical.target depends on multi-user.target but not the other way around, all of the graphical units will be stopped.

You may wish to take a look at the dependencies of the target you are isolating before performing this procedure to ensure that you are not stopping vital services:

systemctl list-dependencies multi-user.target

When you are satisfied with the units that will be kept alive, you can isolate the target by typing:

sudo systemctl isolate multi-user.target

Using Shortcuts for Important Events

There are targets defined for important events like powering off or rebooting. However, systemctl also has some shortcuts that add a bit of additional functionality.

For instance, to put the system into rescue (single-user) mode, you can just use the rescue command instead of isolate rescue.target:

sudo systemctl rescue

This will provide the additional functionality of alerting all logged in users about the event.

To halt the system, you can use the halt command:

sudo systemctl halt

To initiate a full shutdown, you can use the poweroff command:

sudo systemctl poweroff

A restart can be started with the reboot command:

sudo systemctl reboot

These all alert logged in users that the event is occurring, something that simply running or isolating the target will not do. Note that most machines will link the shorter, more conventional commands for these operations so that they work properly with systemd.

For example, to reboot the system, you can usually type:

sudo reboot

Conclusion

By now, you should be familiar with some of the basic capabilities of the systemctl command that allow you to interact with and control your systemd instance. The systemctl utility will be your main point of interaction for service and system state management.

While systemctl operates mainly with the core systemd process, there are other components to the systemd ecosystem that are controlled by other utilities. Other capabilities, like log management and user sessions, are handled by separate daemons and management utilities (journald/journalctl and logind/loginctl, respectively). Taking time to become familiar with these other tools and daemons will make management an easier task.

SSH Without Password

1. Generate the public/private key pair
Generate the public/private key pair for the local host as follows. Press Enter to accept the default file names and an empty passphrase. The command here generates RSA keys.

[web@localhost ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/web/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/web/.ssh/id_rsa.
Your public key has been saved in /home/web/.ssh/id_rsa.pub.
The key fingerprint is:
5e:30:d3:1a:00:c5:0b:29:96:ac:3e:42:20:dc:af:38 web@localhost.localdomain

You can run the ssh-keygen command from any directory, but the key files will be generated in the .ssh directory of the user’s home directory.

2. Change directory to .ssh directory.

[web@localhost ~]$ cd /home/web/.ssh

You will see two files starting with id_rsa. id_rsa is the private key and id_rsa.pub is the public key. Check the timestamps of these files to make sure they are the ones you generated recently.

/.ssh[web@localhost .ssh]$ ls -la
total 32
drwx------ 2 web web 4096 Dec 7 22:05 .
drwx------ 34 web web 12288 Dec 7 22:04 ..
-rw------- 1 web web 1675 Dec 7 22:05 id_rsa
-rw-r--r-- 1 web web 407 Dec 7 22:05 id_rsa.pub
-rw-r--r-- 1 web web 391 Dec 7 22:03 known_hosts

/.ssh[web@localhost .ssh]$ date
Tue Dec 7 22:05:45 PST 2010

3. Copy the RSA public key to the remote host. You have to copy the public key file into the .ssh directory of the user’s home directory; if the .ssh directory is not there, create it as in the example below.
You need to enter the sftp/ssh password.

/.ssh[web@localhost .ssh]$ sftp james@devserver
Connecting to devserver…
james@devserver’s password:
sftp> pwd
Remote working directory: /home/james
sftp> cd .ssh
Couldn’t canonicalise: No such file or directory
sftp> mkdir .ssh
sftp> cd .ssh
sftp> put id_rsa.pub
Uploading id_rsa.pub to /home/james/.ssh/id_rsa.pub
id_rsa.pub 100% 407 0.4KB/s 00:00
sftp> 

4. Log in to the remote host with a password

Once the file is copied over, log in to the remote host using ssh and the password, and go to the .ssh directory under the user’s home directory.
/.ssh[web@localhost .ssh]$ ssh james@devserver
james@devserver’s password:

james@devserver:~[james@devserver ~]$ cd .ssh
james@devserver:~/.ssh[james@devserver .ssh]$ pwd
/home/james/.ssh

james@devserver:~/.ssh[james@devserver .ssh]$ ls -l
total 4
-rw-r--r-- 1 james james 407 Dec 7 22:06 id_rsa.pub

5. Rename the public key file to authorized_keys.
If the authorized_keys file already exists, append the new key to the existing file with:
cat id_rsa.pub >> authorized_keys
Don’t use vi or another editor to open, append, and save these key files, as any extra character or newline would corrupt them.

james@devserver:~/.ssh[james@devserver .ssh]$ mv id_rsa.pub authorized_keys

You can see the contents using the cat command:
james@devserver:~/.ssh[james@devserver .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArVWhE0L2FXNvmggZgqmGU
LVrcE4X7WQr6scSuU5FCQUsXzYjyOL8FbUIIkBeLLMIrV7mYa+
xuszHcvnAho/42/e4r5by8LVMyh0AAo7nketemkO/2ZiUXZhww7tySxgcI5U5L5PDmTCyF7vxLlJ0rGb7Ky//DtpKrBui5P4gIrKBeiA2TlbEL9UrQZ8HgTU3iSGtfUXH0O
26iLSWi6Tf40hEazvvVYESHPSBjUPIMqUGabtz1kKMDQB5x
C+F2MZ4lUCmgK2NexrhVWOrp7ODS1GlKsjSv6NSxOIVW0je
V00ZW9Fvgz865g+fakBITqYP76ptPIVXEps+91ABRSwggQ== web@localhost.localdomain

6. Change the key file and directory permissions 

ssh is very sensitive to permissions, so you have to change the key file and directory permissions for it to work.

6a: Change authorized_keys to 600 permissions

james@devserver:~/.ssh[james@devserver .ssh]$ chmod 600 authorized_keys
james@devserver:~/.ssh[james@devserver .ssh]$ ls -ltr
total 8
-rw-r--r-- 1 james james 407 Dec 7 22:06 id_rsa.pub
-rw------- 1 james james 407 Dec 7 22:08 authorized_keys

james@devserver:~/.ssh[james@devserver .ssh]$ cd ..

6b: Change the .ssh directory to 700 permissions
james@devserver:~[james@devserver ~]$ chmod 700 .ssh

6c: Verify permissions and log out.
james@devserver:~[james@devserver ~]$ logout
Connection to localhost closed.

7. Moment of truth: Try ssh

/.ssh[web@localhost .ssh]$ ssh james@devserver
Last login: Tue Dec 7 22:07:04 2010 from localhost.localdomain
james@devserver:~[james@devserver ~]$ pwd
/home/james

Here we have passwordless secure access working.
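On systems that ship the ssh-copy-id helper, steps 3 through 6 can be performed with a single command from the local host (using the same example user and host as above). It prompts for the remote password once and appends the public key to the remote authorized_keys file, creating the .ssh directory if needed:

/.ssh[web@localhost .ssh]$ ssh-copy-id james@devserver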

The most common problems are:

1. Incorrect permissions on the .ssh directory or the authorized_keys file

2. A corrupt key file; regenerate and copy it again

Backup Configuration

For cPanel & WHM 54

( Home >> Backup >> Backup Configuration )

Overview

The Backup Configuration interface allows system administrators to customize their scheduled backups.

Warning:

We strongly recommend that you only use the Backup Configuration interface to configure backups of your server. We plan to remove the Legacy Backup Configuration interface (Home >> Backup >> Legacy Backup Configuration) from WHM in a future release.

Note:

The system applies the current Backup Configuration settings to accounts that you create or transfer.

To enable and run backups, you must complete all of the following steps:

  1. Select Enable for the Backup Status setting in this interface’s Global Settings section.
  2. Select your desired settings for the Backup Configuration feature.
  3. Use the Backup User Selection interface (Home >> Backup >> Backup User Selection) to enable backups for each desired cPanel user.

    Note:

    To enable or disable the Backup Configuration feature for all users, select the checkbox in the top right corner of the Backup User Selection interface.

Global Settings

You can configure the following global backup settings:

Backup Status: Enable this setting to run the updated Backup Configuration feature. This setting defaults to Enable.

Backup Type: Choose one of the following options to determine how the system stores backup files:

  • Compressed — Select this setting to save all of your information in a compressed format. This setting uses less disk space, but requires more time to complete.
  • Uncompressed — Select this setting to save all of your information in an uncompressed format. This setting uses more disk space, but is faster than Compressed backups.
  • Incremental — Select this setting to store only one uncompressed backup file. The system saves updated account information to the existing backup file, and removes the old information that these updates replace. This setting limits your restoration options, but runs faster and uses less disk space than other backup types.

    Warning:

    If you choose to use Incremental backups, you cannot restore the account to a time before your last backup.

Maximum destination timeout in seconds: Enter the maximum number of seconds to allow a backup process to upload or restore a single backup file.

Note:

We strongly recommend that you enter a number of seconds that is large enough for the system to upload your largest backup file.

Scheduling and Retention

The Scheduling and Retention settings allow you to specify when the backup process runs. You may choose to run backups on a daily, weekly, or monthly basis, or you may use a combination of these settings. Select the checkboxes that correspond to the timing settings that you wish to use.

Note:

You must select at least one of the following settings.

Backup Daily: Your system creates a new backup on each of the days of the week that you select. When you select this setting, you must also configure the following settings:

  1. Select the days of the week on which you wish to run backups.
  2. In the Retain Daily backups text box, enter the maximum number of daily backup files to store on your system at any given time. Enter any number between 1 and 9999.
Backup Weekly: Your system creates a new backup once each week, on the day that you select. When you select this setting, you must also configure the following settings:

  1. Select the day of the week on which you wish to run backups.
  2. In the Retain Weekly backups text box, enter the maximum number of weekly backups to store on your system at any given time. Enter any number between 1 and 9999.
Backup Monthly: Your system creates a new backup once or twice per month, on the days of the month that you select. When you select this setting, you must also configure the following settings:

  1. Select the day or days of the month on which you wish to run backups.
  2. In the Retain Monthly backups text box, enter the maximum number of monthly backups to store on your system at any given time. Enter any number between 1 and 9999.

Note:

If you run daily and monthly backups on the same day, the daily backup runs first, and then the monthly backup copies the daily backup.

Files

The Files settings allow you to configure the information that you wish to back up. Select the checkboxes that correspond to the settings that you wish to use.

Warning:

You must select either the Backup Accounts or Backup System Files checkbox in order to run backups.

Backup Accounts: Back up the user files in each cPanel user’s home directory.

Note:

Click Select Users to open the Backup User Selection interface (Home >> Backup >> Backup User Selection).

After you select this setting, select the type of data to include in the backup file:

  • Backup Suspended Accounts — Select the Enable button to back up suspended accounts.

    Warning:

    If you do not enable this option, your server will not back up suspended accounts, regardless of their settings in the Backup User Selection interface ( Home >> Backup >> Backup User Selection ).

  • Backup Access Logs — Select the Enable button to back up your server’s access logs and the /usr/local/cpanel/domlogs file.
  • Backup Bandwidth Data — Select the Enable button to back up your server’s bandwidth data.
  • Use Local DNS — Select the method to use to back up DNS information:
    • Disable — The system backs up DNS information from the DNS cluster.
    • Enable — The system backs up DNS information from the server for the domain.
Backup System Files: Back up your server’s system files.

Notes:

  • The system stores many of these files in the /etc directory.
  • You must enable this setting for server restoration, but it is not necessary for account restoration. We strongly recommend that you enable this setting.
  • For more information, read our System Backups documentation.

Databases

Select one of the following options for the Backup SQL Databases setting, to determine how to back up SQL databases:

Per Account Only: Only back up the databases for each account. This setting uses the mysqldump utility.

Entire MySQL Directory: Back up all of the databases on the server. This backs up the entire /var/lib/mysql/ directory.

Per Account and Entire MySQL Directory: Perform a comprehensive backup that copies all of the databases for each individual account, as well as all of the databases on the server.

Configure Backup Directory

The following settings allow you to specify where you wish to save your backups.

Warning:

  • We strongly recommend that you save your backups to a remote location in addition to a local destination.
  • If you do not select the Retain backups in the default backup directory setting and do not specify a destination in the Additional Destinations setting, the system will return the following error: Error: Nowhere to back up: no enabled destinations found and retaining local copies is disabled.
  • The backup process and the transfer process use separate queues. If each backup finishes much faster than each transfer, backup files can accumulate on the server and fill the hard drive.
  • Perform backups to NFS mount points at your own risk. If you choose an NFS mount point, you risk data loss if there is a network interruption or if you did not properly configure NFS.
  • To prevent performance degradation, the system automatically disables quotas on non-root filesystems that contain a backup destination.
Default Backup Directory: To change the default backup directory, enter the absolute path to the desired directory location.

Note:

By default, the system saves backup files locally, to the /backup/ directory.

Retain backups in the default backup directory: Select this checkbox to retain each account backup in the /backup/ directory after the backups transfer to another destination.

If you do not select this setting, your server deletes account backup files from the /backup/ directory only after the following events occur:

  • The system successfully transfers the backup file to at least one additional destination.
  • The system attempts, successfully or unsuccessfully, to transfer the backup file to all of your additional destinations.

Note:

This setting does not cause the system to remove system backup files, directories, or other files.

Mount Backup Drive as Needed: Select the Enable button to mount a backup drive. This setting requires a separate mount point and causes the Backup Configuration process to check the /etc/fstab file for a backup mount.

  • If a mount exists with the same name as the staging directory, the Backup Configuration process mounts the drive and backs up the information to the mount.
  • After the backup process finishes, it dismounts the drive.

If you select the Disable button, the Backup Configuration process does not check the /etc/fstab file for a mount.

Additional Destinations

You can save your backups to additional destinations. Each additional destination may increase the amount of time that the backup process requires. If the process runs too long, it may interfere with the next backup process.

Notes:

  • To restore backups that exist in the additional destinations that you create, perform a remote restoration. For more information, read our Remote Restoration documentation.
  • If you use the Incremental backup type, you cannot add additional destinations.
  • To save your updated destination but not validate your changes, click Save Destination.
  • To automatically validate your information after you save your changes, click Save and Validate Destination.

Select a destination type from the menu and click Create new destination. A new section for the selected destination type will appear.

Warning:

Only transfer system backup files over encrypted connections. The following destination types use encrypted connections:

  • Amazon S3™
  • SFTP
  • WebDAV with SSL Enabled

The Additional Local Directory type does not transfer your data from the server, so that method is secure.

Select a tab to view information for that destination type:

Destination Name: Enter a destination name for your backup file. This name appears in your destination table.

Transfer System Backups to Destination: Select this checkbox to transfer system backups to an additional destination.

Warning:

Only transfer system backup files over encrypted connections.

Backup Directory: Enter the directory path in which you wish to store backups.

Note:

This setting is optional.

Remote Host: Enter the hostname or IP address of the remote server.

Important:

  • Do not include http://, https://, a trailing port, or path information in the address that you enter.
  • Do not use local hosts or loopback addresses.
Port: Enter the port to use to communicate with the remote server. By default, SFTP destinations use port 22.

Remote Account Username: Enter the username of the account on the remote server.

Authentication Type: Select how you wish to authenticate to the remote server:

  • Key Authentication — Select this button to use key-based authentication. We strongly recommend that you use this method.
  • Password Authentication — Select this button to use password-based authentication.
Key Authentication Options: If you selected the Key Authentication button for the Authentication Type setting, enter the following information:

  • The full path of the private key on this server, in the Private Key text box.

    Note:

    Click Generate a new key to generate a new private key.

  • The passphrase for this server, in the Passphrase text box.
Password Authentication Options: If you selected the Password Authentication button for the Authentication Type setting, enter the password for the account on the remote server in the Remote Password text box. Unless you specify a new password, the server uses the existing password.
Timeout: Enter the maximum amount of time in seconds that you want the server to wait for a response from the remote server before it generates errors.

  • You must enter a number between 30 and 300.
  • If the server does not respond in this time frame, it makes two additional attempts to contact the server.
  • If the server does not respond after those attempts, the system administrator receives an email that notes the failed attempts, and the system attempts a transfer again the next time that backups run.

Save Configuration

After you configure the desired settings, click Save Configuration at the bottom of the Backup Configuration interface.

To reset all of the settings in the Backup Configuration interface to the default settings, click Reset.

Run backups manually

To run a backup manually, run the following command:

/usr/local/cpanel/bin/backup

If the backup is up-to-date but you still wish to perform a backup, run the following command:

/usr/local/cpanel/bin/backup --force

To use a custom packaging script, perform the following steps:

  1. Copy the /usr/local/cpanel/scripts/pkgacct file and modify it.
  2. Store the modified pkgacct file in the /var/cpanel/lib/Whostmgr/Pkgacct/pkgacct directory.
  3. Run the /usr/local/cpanel/bin/backup command with the --allow-override flag.

Backup files and directories

When you select Backup Configuration Files, cPanel & WHM backs up the following files and directories:

Files

/etc/exim.conf
/etc/exim.conf.local
/etc/exim.conf.localopts
/etc/namedb/named.conf
/etc/rc.conf
/etc/named.conf
/etc/proftpd.conf
/etc/localdomains
/etc/httpd/conf/httpd.conf
/etc/group
/etc/shadow
/etc/master.passwd
/etc/passwd
/etc/fstab
/etc/ips
/etc/ips.remotemail
/etc/ips.remotedns
/etc/reservedips
/etc/reservedipreasons
/etc/quota.conf
/etc/wwwacct.conf
/etc/remotedomains
/etc/rndc.conf
/etc/secondarymx
/etc/my.cnf
/root/.my.cnf
/usr/local/apache/conf/httpd.conf

Directories

/etc/namedb
/etc/valiases
/etc/proftpd
/etc/vdomainaliases
/etc/ssl
/etc/vfilters
/usr/local/frontpage
/usr/share/ssl
/usr/local/cpanel/3rdparty/mailman
/var/lib/rpm
/var/lib/named/chroot/var/named/master
/var/named
/var/cpanel
/var/spool/cron
/var/cron/tabs
/var/spool/fcron
/var/log/bandwidth
/var/ssl
/var/lib/mysql

To configure system backups to include custom files or directories, create a new file or directory in the /var/cpanel/backups/extras directory (for example, /var/cpanel/backups/extras/etc). In that file, enter an absolute path to any files that you wish to back up (for example, /etc/example.conf).
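For example, this minimal sketch registers the hypothetical /etc/example.conf file mentioned above for inclusion in system backups:

mkdir -p /var/cpanel/backups/extras
echo "/etc/example.conf" >> /var/cpanel/backups/extras/etc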

Note:

The server administrator can edit the /etc/my.cnf file to change the MySQL data directory location. In such a case, WHM will back up the directory at its new location.

 

Additional documentation

  • Backup Restoration — Restore your backup with the Backup Restoration interface.
  • Remote Restoration — Restore backups in the Additional Destinations locations that you created.
  • System Backups — Upload system backups to your chosen backup destinations.

Notes on installing WHM & cPanel

1/ Installing WHM & cPanel and activating the license:

A/ Installing:

# cd /home

# curl -o latest -L https://securedownloads.cpanel.net/latest

# sh latest

If there are errors during the cPanel installation, check the log file carefully at /var/log/cpanel-install.log.

We can access WHM via https://ipaddress_of_yourserver:2087 after the cPanel installation finishes.

Then perform the basic configuration.

B/ Activate license:

To activate the license, run: /usr/local/cpanel/cpkeyclt

Log in to https://store.cpanel.net/login/, open the View my license link, and point the license at the IP address you want.

2/ Creating a cPanel account:

Access WHM via https://ipaddress_of_yourserver:2087

Go to Account Functions / Create account.

Use the account created above to access cPanel via https://ipaddress_of_yourserver:2083

3/ Adding database privileges in cPanel:

There is no “Users” or “Privileges” tab in the phpMyAdmin that is included in WHM/cPanel (see http://forums.cpanel.net/f354/phpmyadmin-users-tab-gone-367661.html).

Access cPanel via https://ipaddress_of_yourserver:2083 and go to MySQL Databases.

Then create the user who will control your database.

Back in WHM, go to the Database Map Tool: select the cPanel user and then map database users to database names.

Access cPanel via https://ipaddress_of_yourserver:2083 and go to MySQL Databases again.

Then add the user to the database you want, and finally grant the desired privileges.

Split and Join tar.gz file on Linux

Sometimes when we want to upload a file, we run into difficulties because the file is too large and our internet connection is slow. In that case, we can split the file into small parts and upload it part by part. How do we do this?

First, we must compress the file with tarball archiver.

$ tar -cvvzf <archive-name>.tar.gz /path/to/folder

This command archives our folder into a *.tar.gz file. We can also pass a single file instead of a folder path as the argument. Next, we split the archive into small parts.

$ split -b 1M <archive-name>.tar.gz "parts-prefix"

-b 1M will split the file into parts of 1 megabyte each. The "parts-prefix" argument sets the prefix for the names of the resulting part files.

Example:

We have a video file named video.avi with a size of 30 MB. We will split it into 5 MB parts. We can do:

$ tar -cvvzf test.tar.gz  video.avi

$ split -b 5M test.tar.gz vid

This command creates the archive file test.tar.gz and then splits it into (approximately) six parts of 5 MB each. The parts have the prefix “vid”, so the result will be vidaa, vidab, vidac, vidad, vidae, and vidaf. We can use numbers instead of letters in the suffix by adding the -d option to the split command:

$ split -b 5M -d test.tar.gz vid

To join the parts back together, we can use the cat command:

$ cat vid* > test.tar.gz
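If you recorded a checksum of the original archive before splitting it, you can compare it against the rejoined file before extracting. For example:

$ md5sum test.tar.gz
$ tar -xvzf test.tar.gz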

Cpanel & WHM ports

cPanel

cPanel 2082
cPanel – SSL 2083
WHM 2086
WHM – SSL 2087
Webmail 2095
Webmail – SSL 2096

Email

POP3 110
POP3 – SSL 995
IMAP 143
IMAP – SSL 993
SMTP 25
SMTP Alternate 26
SMTP Alternate 587
SMTP – SSL 465

Web

HTTP 80
SSL 443
FTP 21
FTPs 990
SFTP 22
SFTP Shared/Reseller Servers 2222
Webdisk 2077
Webdisk – SSL 2078
MySQL 3306
MSSQL 1433
SSH 22
SSH Shared/Reseller Servers 2222

Other

Plesk Control Panel 8880
Plesk Control Panel – SSL 8443
Plesk Linux Webmail N/A*
Plesk Windows Webmail (SmarterMail) 9998**
Virtuozzo 4643
DotNet Panel 9001
DotNet Panel Login 80
RDP (Remote Desktop Protocol) 4489

*Plesk Linux Webmail is available for access through port 80 via webmail.domain.com (replace domain.com with the target URL).

**SmarterMail may only be accessed without SSL, as any attempt to access SmarterMail via HTTPS will result in an error. Accessing email securely on a Windows Plesk server requires the use of a third party mail client through the standard POP3, IMAP, and SMTP SSL ports specified above.

Note: This page does not include all of the possible ports that may be opened, since the full list is very large.

For security reasons, many unused ports are closed by default. In certain circumstances, we can open those ports for you. Please refer to our article on opening new ports for more information.
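If you need to check whether a particular port on a server is reachable from your location, a simple connectivity test with a tool such as nc works (the hostname here is a placeholder):

nc -zv server.example.com 2087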

A Beginner’s Guide To LVM

1 Preliminary Note

This tutorial was inspired by two articles I read:
http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
http://www.debian-administration.org/articles/410
These are great articles, but hard to understand if you’ve never worked with LVM before. That’s why I have created this Debian Etch VMware image that you can download and run in VMware Server or VMware Player (see http://www.howtoforge.com/import_vmware_images to learn how to do that).
I installed all the tools we need during the course of this guide on the Debian Etch system by running:

apt-get install lvm2 dmsetup mdadm reiserfsprogs xfsprogs

2 LVM Layout

Basically LVM looks like this:

You have one or more physical volumes (/dev/sdb1 – /dev/sde1 in our example), and on these physical volumes you create one or more volume groups (e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical volumes).
It is a good practice to not allocate the full space to logical volumes, but leave some space unused. That way you can enlarge one or more logical volumes later on if you feel the need for it.
In this example we will create a volume group called fileserver, and we will also create the logical volumes /dev/fileserver/share, /dev/fileserver/backup, and /dev/fileserver/media (which will use only half of the space offered by our physical volumes for now – that way we can switch to RAID1 later on (also described in this tutorial)).

3 Our First LVM Setup

Let’s find out about our hard disks:
fdisk -l

The output looks like this:
server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 18 144553+ 83 Linux
/dev/sda2 19 2450 19535040 83 Linux
/dev/sda4 2451 2610 1285200 82 Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn’t contain a valid partition table
There are no partitions yet on /dev/sdb – /dev/sdf. We will create the partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 and leave /dev/sdf untouched for now. We act as if our hard disks had only 25GB of space instead of 80GB for now, therefore we assign 25GB to /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1:
fdisk /dev/sdb

server1:~# fdisk /dev/sdb

Command (m for help): <-- n
Command action
e extended
p primary partition (1-4)
<-- p
Partition number (1-4): <-- 1
First cylinder (1-10443, default 1): <-- <ENTER>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10443, default 10443): <-- +25000M

Command (m for help): <-- t
Selected partition 1
Hex code (type L to list codes): <-- L
Hex code (type L to list codes): <-- 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now we repeat the same partitioning for /dev/sdc, /dev/sdd, and /dev/sde so that each of them also has a 25GB Linux LVM partition. Then run

fdisk -l

again. The output should look like this:
server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 18 144553+ 83 Linux
/dev/sda2 19 2450 19535040 83 Linux
/dev/sda4 2451 2610 1285200 82 Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 3040 24418768+ 8e Linux LVM

. . .

 

Now we prepare our new partitions for LVM:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume “/dev/sdb1” successfully created
. . .

Let’s revert this last action for training purposes:
pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Labels on physical volume “/dev/sdb1” successfully wiped
. . .

Then run

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume “/dev/sdb1” successfully created
. . .

Now run
pvdisplay

to learn about the current state of your physical volumes:
server1:~# pvdisplay
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 23.29 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID G8lu2L-Hij1-NVde-sOKc-OoVI-fadg-Jd1vyU

. . .

Now let’s create our volume group fileserver and add /dev/sdb1 – /dev/sde1 to it:
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume group “fileserver” successfully created
Let’s learn about our volume groups:
vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
Another command to learn about our volume groups:
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “fileserver” using metadata type lvm2
For training purposes let’s rename our volumegroup fileserver into data:
vgrename fileserver data

server1:~# vgrename fileserver data
Volume group “fileserver” successfully renamed to “data”
Let’s run vgdisplay and vgscan again to see if the volume group has been renamed:
vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name data
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “data” using metadata type lvm2
Now let’s delete our volume group data:
vgremove data

server1:~# vgremove data
Volume group “data” successfully removed
vgdisplay

No output this time:
server1:~# vgdisplay
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Let’s create our volume group fileserver again:
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume group “fileserver” successfully created
Next we create our logical volumes share (40GB), backup (5GB), and media (1GB) in the volume group fileserver. Together they use a little less than 50% of the available space (that way we can make use of RAID1 later on):
lvcreate --name share --size 40G fileserver

server1:~# lvcreate --name share --size 40G fileserver
Logical volume “share” created
lvcreate --name backup --size 5G fileserver

server1:~# lvcreate --name backup --size 5G fileserver
Logical volume “backup” created
lvcreate --name media --size 1G fileserver

server1:~# lvcreate --name media --size 1G fileserver
Logical volume “media” created
Let’s get an overview of our logical volumes:
lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID zZeuKg-Dazh-aZMC-Aa99-KUSt-J6ET-KRe0cD
LV Write Access read/write
LV Status available
# open 0
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

--- Logical volume ---
LV Name /dev/fileserver/media
VG Name fileserver
LV UUID usfvrv-BC92-3pFH-2NW0-2N3e-6ERQ-4Sj7YS
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
lvscan

server1:~# lvscan
ACTIVE ‘/dev/fileserver/share’ [40.00 GB] inherit
ACTIVE ‘/dev/fileserver/backup’ [5.00 GB] inherit
ACTIVE ‘/dev/fileserver/media’ [1.00 GB] inherit
For training purposes we rename our logical volume media into films:
lvrename fileserver media films
server1:~# lvrename fileserver media films
Renamed “media” to “films” in volume group “fileserver”
lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID zZeuKg-Dazh-aZMC-Aa99-KUSt-J6ET-KRe0cD
LV Write Access read/write
LV Status available
# open 0
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

--- Logical volume ---
LV Name /dev/fileserver/films
VG Name fileserver
LV UUID usfvrv-BC92-3pFH-2NW0-2N3e-6ERQ-4Sj7YS
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
lvscan

server1:~# lvscan
ACTIVE ‘/dev/fileserver/share’ [40.00 GB] inherit
ACTIVE ‘/dev/fileserver/backup’ [5.00 GB] inherit
ACTIVE ‘/dev/fileserver/films’ [1.00 GB] inherit
Next let’s delete the logical volume films:
lvremove /dev/fileserver/films

server1:~# lvremove /dev/fileserver/films
Do you really want to remove active logical volume “films”? [y/n]: <-- y
Logical volume “films” successfully removed
We create the logical volume media again:
lvcreate --name media --size 1G fileserver

server1:~# lvcreate --name media --size 1G fileserver
Logical volume “media” created
Now let’s enlarge media from 1GB to 1.5GB:
lvextend -L1.5G /dev/fileserver/media

server1:~# lvextend -L1.5G /dev/fileserver/media
Extending logical volume media to 1.50 GB
Logical volume media successfully resized
Let’s shrink it to 1GB again:
lvreduce -L1G /dev/fileserver/media

server1:~# lvreduce -L1G /dev/fileserver/media
WARNING: Reducing active logical volume to 1.00 GB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce media? [y/n]: <-- y
Reducing logical volume media to 1.00 GB
Logical volume media successfully resized
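The shrink above was safe only because media does not yet contain a filesystem. Once a filesystem exists on a logical volume, you must shrink the filesystem before reducing the volume, or you will destroy data. A rough sketch for an ext3 volume, with illustrative sizes:

umount /var/media
e2fsck -f /dev/fileserver/media
resize2fs /dev/fileserver/media 1G
lvreduce -L1G /dev/fileserver/media
mount /dev/fileserver/media /var/media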

So far we have created three logical volumes, but they do not yet contain filesystems, and without a filesystem we cannot store anything on them. Therefore we create an ext3 filesystem in share, an xfs filesystem in backup, and a reiserfs filesystem in media:
mkfs.ext3 /dev/fileserver/share

server1:~# mkfs.ext3 /dev/fileserver/share
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
5242880 inodes, 10485760 blocks
524288 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
320 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
mkfs.xfs /dev/fileserver/backup

server1:~# mkfs.xfs /dev/fileserver/backup
meta-data=/dev/fileserver/backup isize=256 agcount=8, agsize=163840 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal log bsize=4096 blocks=2560, version=1
= sectsz=512 sunit=0 blks
realtime =none extsz=65536 blocks=0, rtextents=0
mkfs.reiserfs /dev/fileserver/media

server1:~# mkfs.reiserfs /dev/fileserver/media
mkfs.reiserfs 3.6.19 (2003 http://www.namesys.com)

A pair of credits:
Alexander Lyamin keeps our hardware running, and was very generous to our
project in many little ways.

Chris Mason wrote the journaling code for V3, which was enormously more useful
to users than just waiting until we could create a wandering log filesystem as
Hans would have unwisely done without him.
Jeff Mahoney optimized the bitmap scanning code for V3, and performed the big
endian cleanups.
Guessing about desired format.. Kernel 2.6.17-2-486 is running.
Format 3.6 with standard journal
Count of blocks on the device: 262144
Number of blocks consumed by mkreiserfs formatting process: 8219
Blocksize: 4096
Hash function used to sort names: "r5"
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: 2bebf750-6e05-47b2-99b6-916fa7ea5398
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
ALL DATA WILL BE LOST ON '/dev/fileserver/media'!
Continue (y/n):y
Initializing journal - 0%....20%....40%....60%....80%....100%
Syncing..ok

Tell your friends to use a kernel based on 2.4.18 or later, and especially not a
kernel based on 2.4.9, when you use reiserFS. Have fun.

ReiserFS is successfully created on /dev/fileserver/media.
Now we are ready to mount our logical volumes. I want to mount share in /var/share, backup in /var/backup, and media in /var/media, so we must first create these directories:
mkdir /var/media /var/backup /var/share

Now we can mount our logical volumes:
mount /dev/fileserver/share /var/share
mount /dev/fileserver/backup /var/backup
mount /dev/fileserver/media /var/media

Now run
df -h

You should see your logical volumes in the output:
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media

Congratulations, you’ve just set up your first LVM system! You can now write to and read from /var/share, /var/backup, and /var/media as usual.
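
For a quick sanity check, you can write a test file to one of the new volumes and read it back (the file name here is arbitrary):

echo "hello lvm" > /var/share/test.txt
cat /var/share/test.txt
rm /var/share/test.txt
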
We have mounted our logical volumes manually, but of course we’d like to have them mounted automatically when the system boots. Therefore we modify /etc/fstab:
mv /etc/fstab /etc/fstab_orig
cat /dev/null > /etc/fstab

vi /etc/fstab

Put the following into it:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/sda2 / ext3 defaults,errors=remount-ro 0 1
/dev/sda1 /boot ext3 defaults 0 2
/dev/hdc /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
/dev/fileserver/share /var/share ext3 rw,noatime 0 0
/dev/fileserver/backup /var/backup xfs rw,noatime 0 0
/dev/fileserver/media /var/media reiserfs rw,noatime 0 0
If you compare it to our backup of the original file, /etc/fstab_orig, you will notice that we added the lines:
/dev/fileserver/share /var/share ext3 rw,noatime 0 0
/dev/fileserver/backup /var/backup xfs rw,noatime 0 0
/dev/fileserver/media /var/media reiserfs rw,noatime 0 0
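
Before rebooting, it is a good idea to test the new entries, since a typo in /etc/fstab can prevent a clean boot. One way to do this is to unmount the volumes and let mount -a mount everything listed in the file:

umount /var/share /var/backup /var/media
mount -a
df -h

If df -h shows the three volumes again, the fstab entries work.
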
Now we reboot the system:
shutdown -r now

After the system has come up again, run
df -h

again. It should still show our logical volumes in the output:
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media


Editing fstab to automount partitions at startup

Auto-mounting partitions is very easy in Linux Mint with the Disks utility, which has a nice GUI that explains everything.

Here, however, I am going to show you a straightforward way of auto-mounting partitions by editing the /etc/fstab file.

This tutorial is not solely about auto-mounting; it also shows how to edit fstab efficiently and helps you understand what its fields mean.

Steps:

1. sudo gedit /etc/fstab

2. The fstab file is now open in gedit. You need to add an entry for the partition to auto-mount it at startup.

The format of a new entry looks like this:

file_system   mount_point   type  options     dump  pass

You will see this header line in the file; add your new entry below it.

A brief explanation of the fields:

1. file_system = your device ID.

Use this:

/dev/sdax (you should check it with sudo fdisk -l)

It may be /dev/sdbx or /dev/sdcx if you have more than one disk connected.

2. mount_point = where you want to mount your partition.

Use this:

/media/user/label

Here user is your username and label is "software", "movies", or whatever label your partition has.

3. type = the filesystem type: vfat (for FAT32), ntfs, ntfs-3g, ext2, ext4, or whatever your partition uses.

4. options = mount options for the partition (explained below).

5. dump = enables or disables backing up of the device/partition. Usually set to 0, which disables it.

6. pass = controls the order in which fsck checks the device/partition for errors at boot time. The root device should be 1; other partitions should be 2, or 0 to disable checking.

So for the auto-mounting case, the format above reduces to:

/dev/sdax /media/user/label  type  options           0  0

(you can check the type with sudo fdisk -l)

The options field:

  • sync/async – All I/O to the file system should be done synchronously/asynchronously.
  • auto/noauto – The filesystem will be mounted automatically at startup/The filesystem will NOT be automatically mounted at startup.
  • dev/nodev – Interpret/Do not interpret character or block special devices on the file system.
  • exec / noexec – Permit/Prevent the execution of binaries from the filesystem.
  • suid/nosuid – Permit/Block the operation of suid and sgid bits.
  • ro/rw – Mount read-only/Mount read-write.
  • user/nouser – Permit any user to mount the filesystem (this automatically implies noexec, nosuid, nodev unless overridden) / Only permit root to mount the filesystem. This is also a default setting.
  • defaults – Use default settings. Equivalent to rw, suid, dev, exec, auto, nouser, async.
  • _netdev – The filesystem lives on a device that requires network access; mount it after bringing up the network. Typically used with network filesystems such as nfs.

The final format for auto-mounting then reduces to:

/dev/sdax /media/user/label  type     defaults       0  0  

For ntfs:

/dev/sdax /media/user/label   ntfs  defaults       0  0  

For ext4:

/dev/sdax /media/user/label   ext4  defaults       0  0  

And so on for other filesystem types.

You can replace defaults with your own set of options, for example:

/dev/sdax /media/user/label   ext4  rw,suid,dev,noexec,auto,user,async      0  0

You need to add an entry for each partition you want to auto-mount.
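
One caveat: device names like /dev/sdax can change when disks are added or removed. A more robust (optional) variant is to identify the partition by its UUID; you can print it with blkid and use it in the first field (the UUID below is a placeholder):

sudo blkid /dev/sdax

UUID=<uuid-printed-by-blkid> /media/user/label ext4 defaults 0 0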

Register CloudLinux and KernelCare

1/ Registering CloudLinux Server:

To register your server with the CloudLinux Network using an activation key, run:

$ yum install rhn-setup --enablerepo=cloudlinux-base

$ /usr/sbin/rhnreg_ks --activationkey=<activation key> --force

Where the activation key looks like 1231-2b48feedf5b5a0e0609ae028d9275c93

If you have an IP-based license, use the clnreg_ks command:

$ yum install rhn-setup --enablerepo=cloudlinux-base
$ /usr/sbin/clnreg_ks --force

2/ Installation of KernelCare:

KernelCare is compatible with 64-bit versions of RHEL/CentOS 5.x, 6.x, and 7.x, CloudLinux 5 & 6.x, Virtuozzo/PCS/OpenVZ 2.6.32, and Debian 6 & 7 kernels. A list of compatible kernels can be found at the following link: http://patches.kernelcare.com/.

To install KernelCare on an RPM-based system, run:

rpm -i https://downloads.kernelcare.com/kernelcare-latest.x86_64.rpm

To install KernelCare on a Debian system, run:

$ wget https://downloads.kernelcare.com/kernelcare-latest.deb

$ dpkg -i kernelcare-latest.deb

If you are using an IP-based license, nothing else needs to be done. If you are using a key-based license, run:

$ /usr/bin/kcarectl --register KEY

To check whether patches have been applied, run:

$ /usr/bin/kcarectl --info

The software will automatically check for new patches every 4 hours.

If you would like to run an update manually:

$ /usr/bin/kcarectl --update

More information can be found at the following link: http://www.kernelcare.com/faq.php

How To Use Systemctl to Manage Systemd Services and Units

Enabling and Disabling Services

To start a service at boot, use the enable command:

sudo systemctl enable application.service

This will create a symbolic link from the system’s copy of the service file (usually in /lib/systemd/system or /etc/systemd/system) into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants. We will go over what a target is later in this guide).

To disable the service from starting automatically, you can type:

sudo systemctl disable application.service

This will remove the symbolic link that indicated that the service should be started automatically.

Keep in mind that enabling a service does not start it in the current session. If you wish to start the service and enable it at boot, you will have to issue both the start and enable commands.
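
For example, to bring up a hypothetical application.service immediately and also have it start on every boot, you would run both:

sudo systemctl start application.service
sudo systemctl enable application.service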

Checking the Status of Services

To check the status of a service on your system, you can use the status command:

systemctl status application.service

This will provide you with the service state, the cgroup hierarchy, and the first few log lines.

For instance, when checking the status of an Nginx server, you may see output like this:

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2015-01-27 19:41:23 EST; 22h ago
 Main PID: 495 (nginx)
   CGroup: /system.slice/nginx.service
           ├─495 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; error_log stderr;
           └─496 nginx: worker process

Jan 27 19:41:23 desktop systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 27 19:41:23 desktop systemd[1]: Started A high performance web server and a reverse proxy server.

This gives you a nice overview of the current status of the application, notifying you of any problems and any actions that may be required.

There are also methods for checking for specific states. For instance, to check to see if a unit is currently active (running), you can use the is-active command:

systemctl is-active application.service

This will return the current unit state, which is usually active or inactive. The exit code will be “0” if it is active, making the result simpler to parse programmatically.

To see if the unit is enabled, you can use the is-enabled command:

systemctl is-enabled application.service

This will output whether the service is enabled or disabled and will again set the exit code to “0” or “1” depending on the answer to the command question.

A third check is whether the unit is in a failed state. This indicates that there was a problem starting the unit in question:

systemctl is-failed application.service

This will return active if it is running properly or failed if an error occurred. If the unit was intentionally stopped, it may return unknown or inactive. An exit status of “0” indicates that a failure occurred, while a non-zero exit status indicates any other state.
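
Because these commands set meaningful exit codes, they are handy in shell scripts. As a small sketch (assuming a unit called application.service), this restarts the unit only if it is not currently active; the --quiet flag suppresses the textual output:

#!/bin/sh
# is-active exits with code 0 only when the unit is active
if ! systemctl is-active --quiet application.service; then
    systemctl restart application.service
fi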

System State Overview

The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system. There are a number of systemctl commands that provide this information.

Listing Current Units

To see a list of all of the active units that systemd knows about, we can use the list-units command:

systemctl list-units

This will show you a list of all of the units that systemd currently has active on the system. The output will look something like this:

UNIT                                      LOAD   ACTIVE SUB     DESCRIPTION
atd.service                               loaded active running ATD daemon
avahi-daemon.service                      loaded active running Avahi mDNS/DNS-SD Stack
dbus.service                              loaded active running D-Bus System Message Bus
dcron.service                             loaded active running Periodic Command Scheduler
dkms.service                              loaded active exited  Dynamic Kernel Modules System
getty@tty1.service                        loaded active running Getty on tty1

. . .

The output has the following columns:

  • UNIT: The systemd unit name
  • LOAD: Whether the unit’s configuration has been parsed by systemd. The configuration of loaded units is kept in memory.
  • ACTIVE: A summary state about whether the unit is active. This is usually a fairly basic way to tell if the unit has started successfully or not.
  • SUB: This is a lower-level state that indicates more detailed information about the unit. This often varies by unit type, state, and the actual method in which the unit runs.
  • DESCRIPTION: A short textual description of what the unit is/does.

Since the list-units command shows only active units by default, all of the entries above will show “loaded” in the LOAD column and “active” in the ACTIVE column. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments:

systemctl

We can tell systemctl to output different information by adding additional flags. For instance, to see all of the units that systemd has loaded (or attempted to load), regardless of whether they are currently active, you can use the --all flag, like this:

systemctl list-units --all

This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.

You can use other flags to filter these results. For example, we can use the --state= flag to indicate the LOAD, ACTIVE, or SUB states that we wish to see. You will have to keep the --all flag so that systemctl allows non-active units to be displayed:

systemctl list-units --all --state=inactive

Another common filter is the --type= filter. We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use:

systemctl list-units --type=service
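
These filters can be combined. For instance, to list only the service units that are currently running, you can pair --type= with --state=:

systemctl list-units --type=service --state=running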

Listing All Unit Files

The list-units command only displays units that systemd has attempted to parse and load into memory. Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead:

systemctl list-unit-files

Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.

UNIT FILE                                  STATE   
proc-sys-fs-binfmt_misc.automount          static  
dev-hugepages.mount                        static  
dev-mqueue.mount                           static  
proc-fs-nfsd.mount                         static  
proc-sys-fs-binfmt_misc.mount              static  
sys-fs-fuse-connections.mount              static  
sys-kernel-config.mount                    static  
sys-kernel-debug.mount                     static  
tmp.mount                                  static  
var-lib-nfs-rpc_pipefs.mount               static  
org.cups.cupsd.path                        enabled

. . .

The state will usually be “enabled”, “disabled”, “static”, or “masked”. In this context, static means that the unit file does not contain an “install” section, which is used to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.
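
On recent versions of systemd, the --state= filter also works with list-unit-files, which makes it easy to see, for example, only the enabled unit files:

systemctl list-unit-files --state=enabled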

We will cover what “masked” means momentarily.

Unit Management

So far, we have been working with services and displaying information about the units and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.

Displaying a Unit File

To display the unit file that systemd has loaded into its system, you can use the cat command (this was added in systemd version 209). For instance, to see the unit file of the atd scheduling daemon, we could type:

systemctl cat atd.service
[Unit]
Description=ATD daemon

[Service]
Type=forking
ExecStart=/usr/bin/atd

[Install]
WantedBy=multi-user.target

The output is the unit file as known to the currently running systemd process. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later).

Displaying Dependencies

To see a unit’s dependency tree, you can use the list-dependencies command:

systemctl list-dependencies sshd.service

This will display a hierarchy mapping the dependencies that must be dealt with in order to start the unit in question. Dependencies, in this context, include those units that are either required by or wanted by the units above it.

sshd.service
├─system.slice
└─basic.target
  ├─microcode.service
  ├─rhel-autorelabel-mark.service
  ├─rhel-autorelabel.service
  ├─rhel-configure.service
  ├─rhel-dmesg.service
  ├─rhel-loadmodules.service
  ├─paths.target
  ├─slices.target

. . .

The recursive dependencies are only displayed for .target units, which indicate system states. To recursively list all dependencies, include the --all flag.

To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other flags that are useful are the --before and --after flags, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.
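
For example, to see which units pull in sshd.service rather than what it depends on, you can type:

systemctl list-dependencies --reverse sshd.service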

Checking Unit Properties

To see the low-level properties of a unit, you can use the show command. This will display a list of properties that are set for the specified unit using a key=value format:

systemctl show sshd.service
Id=sshd.service
Names=sshd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=syslog.target network.target auditd.service systemd-journald.socket basic.target system.slice
Description=OpenSSH server daemon

. . .

If you want to display a single property, you can pass the -p flag with the property name. For instance, to see the conflicts that the sshd.service unit has, you can type:

systemctl show sshd.service -p Conflicts
Conflicts=shutdown.target

Masking and Unmasking Units

We saw in the service management section how to stop or disable a service, but systemd also has the ability to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command:

sudo systemctl mask nginx.service

This will prevent the Nginx service from being started, automatically or manually, for as long as it is masked.
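
Since masking works by linking the unit name to /dev/null within /etc/systemd/system, on most systems you can also see the mask directly in the filesystem:

ls -l /etc/systemd/system/nginx.service
# the unit should appear as a symlink pointing to /dev/null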

If you check the list-unit-files, you will see the service is now listed as masked:

systemctl list-unit-files
. . .

kmod-static-nodes.service              static  
ldconfig.service                       static  
mandb.service                          static  
messagebus.service                     static  
nginx.service                          masked
quotaon.service                        static  
rc-local.service                       static  
rdisc.service                          disabled
rescue.service                         static

. . .

If you attempt to start the service, you will see a message like this:

sudo systemctl start nginx.service
Failed to start nginx.service: Unit nginx.service is masked.

To unmask a unit, making it available for use again, simply use the unmask command:

sudo systemctl unmask nginx.service

This will return the unit to its previous state, allowing it to be started or enabled.

Editing Unit Files

While the specific format for unit files is outside of the scope of this tutorial, systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments. This functionality was added in systemd version 218.

The edit command, by default, will open a unit file snippet for the unit in question:

sudo systemctl edit nginx.service

This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.

Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file. The snippet’s directives will take precedence over those found in the original unit file.
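
As a minimal sketch, a snippet needs to contain nothing more than the section header and the directives being changed. For instance, an override.conf that only adds automatic restarts to the unit (the directives here are chosen purely as an example) could look like this:

[Service]
Restart=always
RestartSec=5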

If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag:

sudo systemctl edit --full nginx.service

This will load the current unit file into the editor, where it can be modified. When the editor exits, the changed file will be written to /etc/systemd/system, which will take precedence over the system’s unit definition (usually found somewhere in /lib/systemd/system).

To remove any additions you have made, either delete the unit’s .d configuration directory or the modified service file from /etc/systemd/system. For instance, to remove a snippet, we could type:

sudo rm -r /etc/systemd/system/nginx.service.d

To remove a full modified unit file, we would type:

sudo rm /etc/systemd/system/nginx.service

After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:

sudo systemctl daemon-reload

Adjusting the System State (Runlevel) with Targets

Targets are special unit files that describe a system state or synchronization point. Like other units, the files that define targets can be identified by their suffix, which in this case is .target. Targets do not do much themselves, but are instead used to group other units together.

This can be used in order to bring the system to certain states, much like other init systems use runlevels. They are used as a reference for when certain functions are available, allowing you to specify the desired state instead of the individual units needed to produce that state.

For instance, there is a swap.target that is used to indicate that swap is ready for use. Units that are part of this process can sync with this target by indicating in their configuration that they are WantedBy= or RequiredBy= the swap.target. Units that require swap to be available can specify this condition using the Wants=, Requires=, and After= specifications to indicate the nature of their relationship.
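
As a sketch, a hypothetical unit that must not start until swap is available could express that relationship in its [Unit] section like this:

[Unit]
Description=Example service that needs swap
Requires=swap.target
After=swap.target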

Getting and Setting the Default Target

The systemd process has a default target that it uses when booting the system. Satisfying the cascade of dependencies from that single target will bring the system into the desired state. To find the default target for your system, type:

systemctl get-default
multi-user.target

If you wish to set a different default target, you can use the set-default command. For instance, if you have a graphical desktop installed and you wish for the system to boot into that by default, you can change your default target accordingly:

sudo systemctl set-default graphical.target

Listing Available Targets

You can get a list of the available targets on your system by typing:

systemctl list-unit-files --type=target

Unlike runlevels, multiple targets can be active at one time. An active target indicates that systemd has attempted to start all of the units tied to the target and has not tried to tear them down again. To see all of the active targets, type:

systemctl list-units --type=target

Isolating Targets

It is possible to start all of the units associated with a target and stop all units that are not part of the dependency tree. The command that we need to do this is called, appropriately, isolate. This is similar to changing the runlevel in other init systems.

For instance, if you are operating in a graphical environment with graphical.target active, you can shut down the graphical system and put the system into a multi-user command line state by isolating the multi-user.target. Since graphical.target depends on multi-user.target but not the other way around, all of the graphical units will be stopped.

You may wish to take a look at the dependencies of the target you are isolating before performing this procedure to ensure that you are not stopping vital services:

systemctl list-dependencies multi-user.target

When you are satisfied with the units that will be kept alive, you can isolate the target by typing:

sudo systemctl isolate multi-user.target

Using Shortcuts for Important Events

There are targets defined for important events like powering off or rebooting. However, systemctl also has some shortcuts that add a bit of additional functionality.

For instance, to put the system into rescue (single-user) mode, you can just use the rescue command instead of isolate rescue.target:

sudo systemctl rescue

This will provide the additional functionality of alerting all logged in users about the event.

To halt the system, you can use the halt command:

sudo systemctl halt

To initiate a full shutdown, you can use the poweroff command:

sudo systemctl poweroff

A restart can be started with the reboot command:

sudo systemctl reboot

These all alert logged in users that the event is occurring, something that simply running or isolating the target will not do. Note that most machines will link the shorter, more conventional commands for these operations so that they work properly with systemd.

For example, to reboot the system, you can usually type:

sudo reboot

Conclusion

By now, you should be familiar with some of the basic capabilities of the systemctl command that allow you to interact with and control your systemd instance. The systemctl utility will be your main point of interaction for service and system state management.

While systemctl operates mainly with the core systemd process, there are other components to the systemd ecosystem that are controlled by other utilities. Other capabilities, like log management and user sessions, are handled by separate daemons and management utilities (journald/journalctl and logind/loginctl, respectively). Taking time to become familiar with these other tools and daemons will make management an easier task.