IBM Storwize V7000 Unified - description, connection and configuration of storage systems

Today's post will focus on IBM Storwize V7000 Unified.

Let's look at connection and initialization issues, and also conduct a small performance test.

First, some background information:

IBM Storwize V7000 Unified is a unified data storage system that provides block and file access simultaneously (SAN and NAS). File access is available over the NFS, CIFS, FTP, HTTPS and SCP protocols, and local and remote file replication is supported. On top of that it inherits all the features of the original Storwize V7000: Thin Provisioning (virtual allocation of disk space), FlashCopy (snapshots and clones of volumes), Easy Tier (tiered storage), Data Migration, Real-time Performance monitoring, Metro and/or Global Mirror (remote replication), External Virtualization (virtualization of external storage systems), and Real-time Compression (data compression).

The system consists of the V7000 itself and two file modules (essentially System x servers with specialized software installed on them) joined in a cluster under a single graphical interface - as they say at IBM: one system, one management point, one unified solution.

Installation and initialization of the system is quite simple; the main thing is to make sure the cabling is correct and to have a clear understanding of the procedure. It also does not hurt to visit the IBM Storwize V7000 Unified Information Center (http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.132.doc%2Fmanpages%2Fdetachnw.html)

Example of IBM Storwize V7000 Unified cabling

To initialize, perform the following procedure:


Click “Launch GUI” and the browser will open at the IP address specified in the Management IP field, where we will see the system initialization process. Upon completion, having specified all the necessary parameters, we are greeted by the already familiar GUI, now filled with new items.

If something went wrong and a problem arose during initialization, check the file “satask_result.html” on the flash drive with the utility: as a rule, it contains the number of the error that caused the failure. Re-initialization is unlikely to succeed if at least one of the system elements has already been configured, so all settings must be reset first.

To reset the storage system itself, open the service graphical interface of the controllers (the IP address can be changed with the same InitTool utility; the default addresses are 192.168.70.121/122) and do the following:

  1. Switch node1 and node2 to service mode (“Enter Service State”).
  2. On the “Manage System” tab, clear the system information of the selected node.
  3. On the “Configure Enclosure” tab, reset the system ID (check the “Reset the system ID” box and click “Modify”).
  4. Repeat these steps for both controllers (selecting node1 and node2 in turn on the “Home” tab), then reboot the storage system.

To delete the configuration on the file modules, reinstall the system from the included disk. On each loaded module, log in with the username/password root/Passw0rd, remove the old configuration ( $ rm -rf /persist/* ), check that the files have been deleted ( $ ls -ahl /persist/* ), then insert the disk and reboot ( $ reboot ); the installation will begin automatically after confirmation (press “Enter”).

Below are several graphs of system performance with block access.

The host ran Windows Server 2012 and tested two local disks presented over FC: one 100Gb volume on a RAID10 of 4 SSDs (200Gb each), and a second 100Gb volume from a pool consisting of three RAID5 groups containing 19 SAS disks (300Gb, 15k); two of the RAID groups included seven disks each, and the third had five. Testing was carried out with IOmeter using two specifications: “100%Random-8k-70%Read” - 8kb blocks, 100% random access, 70% read operations, 30% write; and “Max Throughput-50%Read” - 32kb blocks, 100% sequential access, 50% read and 50% write operations. The queue depth was 64.

I would like to show how easy it is to set up a data storage system from IBM. Special thanks to Dmitry K. from Orenburg, who took the trouble to capture the installation process and provided the screenshots.

The most basic diagram:

  • IBM Storwize v3700 storage system in the base configuration, with the ability to connect servers via iSCSI and SAS; 4 disks of 600Gb installed
  • two IBM System x3650 M4 servers without local disks, each with two single-port SAS HBA cards
  • cross-connected, fault-tolerant cabling - each HBA adapter in a server is connected to its own storage system controller

The task is as follows:

  1. Connect to storage system for management
  2. Update the firmware to support SAS connections
  3. Create array from disks, RAID level 10
  4. Since we have servers without hard drives, we create a separate LUN for each server to install the Windows Server 2012 operating system
  5. We create one common LUN that will be accessible to both servers. It will be used for the MS SQL 2012 cluster, or more precisely, for storing its databases
  6. The task does not involve the use of virtualization

Let's start setting up

1

The storage system comes with a special flash drive used for initial configuration, namely setting the administrator password and the service IP address for connecting to the web interface. Insert the flash drive into a computer and run the InitTool.bat utility from it.

2

Since we just took the storage system out of the box, select the Create a new system option

3

We set the IP address by which we will connect to the storage system.

4

System initialization process:

  1. We safely remove the device from the computer and take out the flash drive.
  2. Go to one of the storage system controllers. The flash drive will need to be inserted into one of the USB connectors next to the network management interfaces. But before that, make sure that the three indicator lights on the upper right side of the controller show the correct pattern: the left one is on, the middle one is blinking, the right one is off.
  3. After the flash drive is placed in any of the USB ports, the right indicator (exclamation mark) starts blinking. Wait until it stops blinking, after which you can remove the flash drive and return it to the computer to complete the wizard’s steps.

5

Through a browser (IE8 or Firefox 23+ is recommended) we go to the web interface.

The default password for the superuser login is passw0rd (spelled with a zero).
Now is the time to update the firmware; to do this, go to the menu Settings -> General -> Upgrade Machine Code

The firmware was downloaded in advance from the official website ibm.com. In our case, this is Version 7.1.0.3 (build 80.3.1308121000). It comes with an Upgrade Test Utility: first we upload the utility to the storage system, and then the firmware itself.

6

The storage system automatically detected the 4 installed disks, assigned three of them to a pool and designated one as a hot spare.

If there were more disks, it might make sense to leave such an automatic setting. In our case, it is better to repartition the disks differently.

7

Deleting the automatically created Pool

8

We get 4 free disks from which we will create RAID 10

9

Click Configure Storage, then select which RAID we want to create and how many disks will be used for it.

10

Set the name for the newly created Pool.

To avoid getting confused by the terms: we create a RAID (array) from free disks, and the resulting free space is the Pool. Then we cut the pool space into pieces, the so-called LUNs or Volumes, which can then be presented to servers (hosts).
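
For reference, the same steps can also be performed from the Storwize CLI over ssh (log in to the management IP as superuser). This is only a rough sketch: the pool name Pool0 and the drive IDs 0-3 are placeholders (check the real drive IDs with lsdrive):

mkmdiskgrp -name Pool0 -ext 256
mkarray -level raid10 -drive 0:1:2:3 Pool0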

11

Pool has been created

12

Create a new LUN in the pool

13

It’s not visible in the screenshot, but we are setting the LUN size

14

Thus, using the LUN creation wizard, we make 3 LUNs.

As planned: two of 100Gb each for the server operating systems, and one shared 500Gb LUN for the MS SQL 2012 cluster.
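
The CLI equivalent would be roughly the following (HOST_LUN_TOP and SQL_LUN are the names used later in this article; the name of the second OS LUN and the pool name Pool0 are placeholders carried over from the earlier sketch):

mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name HOST_LUN_TOP
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name HOST_LUN_BOTTOM
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -name SQL_LUN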

15

Now you need to tell the storage system which servers (host) are connected to it. In the basic configuration there are only two connection options - iSCSI and SAS.

We have two servers that are connected to Storwize v3700 via SAS

16

At this step, we tell the storage system that our first server is connected to it with two SAS cables, plugged into the server’s two SAS HBA cards, each with its own identifier (16 digits).

Thus, we add both servers, each with two identifiers.

17

We present LUNs to servers. In other words, we assign access rights.

In the screenshot, HOST_LUN_TOP is presented only to the first server, because its operating system will be installed on it, so the second server cannot see this LUN.
SQL_LUN, on the other hand, must be accessible to both servers, because the MS SQL cluster databases will be located on it.
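
From the CLI, the hosts and mappings could be defined roughly as follows (the host names and the 16-digit SAS identifiers are placeholders; substitute the real WWPNs of your HBAs):

mkhost -name SRV1 -saswwpn 500062B200000001:500062B200000002
mkhost -name SRV2 -saswwpn 500062B200000003:500062B200000004
mkvdiskhostmap -host SRV1 HOST_LUN_TOP
mkvdiskhostmap -host SRV2 HOST_LUN_BOTTOM
mkvdiskhostmap -host SRV1 SQL_LUN
mkvdiskhostmap -host SRV2 SQL_LUN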

To configure and further manage the DS35xx series storage systems from IBM, the IBM DS Storage Manager program is used; the latest version can be downloaded from the official website, after registration of course. There are versions of the program for different operating systems: Linux, Windows, Mac OS, HP-UX.

Here, it’s a good idea to download the latest firmware updates for storage system controllers. Otherwise, the storage system may not see disks or HBA adapters in the servers or other related problems may arise.

I don’t know why, but many people have trouble finding and downloading files on the IBM website. Go to Ibm.com -> Support and Downloads -> Fixes, updates and drivers -> Quick find -> enter "DS3500 (DS3512, DS3524)" in the search bar -> View DS3500 (DS3512, DS3524) downloads. The IBM portal does not always work correctly, so if it does not work, try a different browser.

The firmware for the controller looks like this:

And the files for downloading DS Storage Manager look like this:



After installing and launching the program, you are prompted to select a method for finding the storage system. Automatic scans the network and looks for a connected DS35xx; in Manual you need to manually enter the IP addresses of both controllers of our storage system. For convenience, the default management interface addresses are written on the storage system itself under the ports. If DHCP is running on the network, then addresses will be obtained automatically.
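
Besides the GUI, DS Storage Manager also installs the SMcli command-line client, which can reach the storage system by the controllers' IP addresses. As a quick sketch (the addresses below are the DS3500 factory defaults for the management ports), the following command prints the subsystem profile:

SMcli 192.168.128.101 192.168.128.102 -c "show storageSubsystem profile;"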



We see that for fault tolerance, two management ports are built into each controller, but, usually, the first ports of each controller are used for management.


Connection diagram

Before you start setting up, you need to have a clear picture of what you want to get in the end; if there is no such understanding, you should not start. Let's build the simplest scheme and connect two servers to the storage system as shown in the diagram.


Each server has two SAS HBA adapters, for those who don’t know, this is just a PCI-E card with a SAS input. Two HBAs are installed for fault tolerance; if one of the controllers in the storage system fails, work will continue through the other. By the same logic, the system is protected from problems with the SAS cable or HBA adapter in the server.

Setup. Logic.

We have a storage system with disks in it. First, we need to assemble some kind of RAID (array) from disks, then create a logical volume (LUN) on this RAID, then present this volume to servers (mapping) so that they can see it and be able to work with it. This is the logic.

Now, in order. I will perform all manipulations in a simulator, which can be downloaded from the official IBM storage website. The interface differs slightly from what you will see on a real DS3524 or DS3512.
1.. Earlier we selected the automatic method of searching for storage systems; the program found and connected to ours, and the storage system is displayed in the console.

2.. Right-click on the storage system and select Manage to begin configuration.

3.. A setup wizard opens in a new window, but since I want to show a universal sequence of actions, close it.

4.. In the Logical/Physical View tab we see unallocated disk space. There are two types of disks in the simulated storage system; we will configure the usual SATA ones. First we create an Array (RAID)



6.. Set a name for our array


7.. We choose which RAID level we want. RAID 10 is not listed; to create it you need to select RAID 1
8.. The wizard then explains that if you create RAID 1 from four or more disks, RAID 10 (or 1+0, which is the same thing) will be created automatically
9.. We choose to create a RAID from 38 disks

10.. After the array is created, the volume (LUN) creation wizard starts automatically; it can also be launched from the console, as in step 4, only you need to select the previously created array.

11.. You need to indicate the size of the LUN, in my case 8 Tb (total free 17.6 Tb), and come up with a name for the volume
12.. An important point: if we know which OS will use this LUN, we need to specify it. There is a separate entry for VMware; for XenServer, Linux is selected. For some reason these entries are missing in my simulator
13.. After creating the Array and LUN, we see them in the console
14.. Now you need to go to another tab and give the servers access to this LUN. We see that a Default Group has been created and LUN1 is available to this group. We just need to add our servers (first one, then the other) to this group so that they can connect to LUN1.

15.. Right-click on Default Group, Define -> Host

16.. Each of our servers has two SAS HBAs, and it is through them that the connection to the storage system is made. The storage system identifies a server by its HBA adapters, or more precisely, by their unique “identifiers”.

Set the host name (I have ESX1). We select two “identifiers” that belong to the server we are connecting. You can see what identifiers the server has by connecting to the ESXi host directly through vSphere Client or through vCenter Server. There, look in the “storage adapters” section.
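
If the command line is more convenient, the same identifiers can be listed from the ESXi shell (or over SSH); on ESXi 5.x and later the following command shows the HBAs together with their unique identifiers:

esxcli storage core adapter list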

Move the two “identifiers” from the left column to the right. Then select each “identifier” and click the Edit button to add a description to it. This is done so as not to get lost in a large number of identifiers.

In my simulator there are just zeros instead of unique “identifiers”; don’t pay attention to that, on real hardware everything will be as it should be.

17.. Now select the host operating system, if VMware then select VMware

18.. After this, you will see your host in the console, and because it is in the Default Group, LUN1 will be available to it.

Conclusion. It turned out to be a long article, but in practice everything happens much faster: click through all the steps a couple of times and the process of connecting IBM storage systems will no longer cause any problems.

Setting up an iSCSI connection is a little more complicated. I advise you to choose either SAS or FC.

Clusters allow you to scale your IBM® WebSphere Portal configuration. They also provide high availability for J2EE applications, because in the event of a failure requests are automatically forwarded to healthy servers. A cluster can be configured in various ways: horizontal, vertical, multiple and dynamic.

The following illustration shows a horizontal cluster configuration in which WebSphere Portal is installed on multiple servers, or in multiple profiles on one physical server. A multi-server configuration reduces the number of single points of failure, but requires additional hardware, such as extra servers. A multi-profile configuration also reduces the number of single points of failure; it requires less additional hardware than a multi-server configuration, but additional resources, such as extra memory, may still be required. The Deployment Manager manages the cell containing the horizontal cluster nodes.

To keep the hardware unchanged, you can also configure vertical cluster members on a single node. Typically, large portal clusters combine horizontal and vertical scaling: for example, there may be four portal nodes, each containing five cluster members, for a total of twenty cluster members.

In response to customer feedback, instructions are provided for configuring WebSphere Portal for each operating system. Select your operating system to begin the process.

  1. Preparing the IBM i operating system in a cluster environment
    See information about setting up your operating system to work with IBM WebSphere Portal. If you install other components, additional steps may be required; Please review the documentation for these components.
  2. Prepare the primary node on IBM i
    Before creating a cluster environment, you must install IBM WebSphere Portal on the primary node and then configure the database and network deployment manager.
  3. Create and add a new Deployment Manager profile on IBM i
    In a production environment, Deployment Manager must be installed on a remote server, not on the same server as IBM WebSphere Portal. To create a remote Deployment Manager profile, use the Profile Management Tool or the manageprofiles command (a rough example of a manageprofiles call is shown after this list). In a test or development environment, Deployment Manager can be installed on your local system using IBM Installation Manager. If you are installing a remote Deployment Manager profile, follow the steps to create and add a Deployment Manager profile. Skip these steps if you are installing a local Deployment Manager profile using Installation Manager on the primary node.
  4. Creating a cluster on IBM i
    After installing IBM WebSphere Portal on the primary node, configuring the remote database, and preparing the primary node to communicate with Deployment Manager, you can create a static cluster to handle failover requests.
  5. Preparing the web server when the portal is installed on IBM i in a clustered environment
    Install and configure the web server plug-in provided by IBM WebSphere Application Server so that the web server can interact with IBM WebSphere Portal.
  6. IBM i Cluster: Preparing user registries
    Install and configure an LDAP server as a user registry to store user information and identify users in a clustered production environment.

  7. Set up user registry protection in IBM WebSphere Portal to protect the server from unauthorized access. You can configure a standalone LDAP user registry or add LDAP or Database user registries to the default federated store. Once the user registry is configured, you can add scopes for virtual portals or a secondary database to store attributes that cannot be stored in the LDAP user registry.
  8. Provisioning additional cluster members on IBM i
    After installing and configuring the main node, you can create additional nodes. You can install IBM WebSphere Portal on each node and then configure the node to access the database and user registry before adding it to the cluster.
  9. IBM i cluster: Fine-tuning servers
    Fine-tuning your servers plays an important role in ensuring that your WebSphere Portal environment performs as expected. WebSphere Portal is not initially tuned for production, so for optimal performance, review and follow the procedures in the IBM WebSphere Portal Tuning Guide. If the fine-tuning guide is not available for the current release of WebSphere Portal, use the guide for the previous release.
  10. Setting up search in an IBM i cluster
    IBM WebSphere Portal provides two different search options. You can use both search capabilities in a clustered environment.
  11. Setting up multiple clusters on IBM i
    Additional clusters in a cell are created in much the same way as the first, with a few exceptions. In essence, the new profile is intended to be used as a primary profile, in IBM WebSphere Portal cluster terminology, and will serve as the basis for the new cluster definition; this repeats the process of creating the first cluster in the cell. During federation, if any applications on this new node already exist in the cell (because they are used by the first cluster), the Deployment Manager will not allow them to be added again. After federation, applications that already exist in the cell are not mapped to the WebSphere_Portal server on the newly added node; therefore, existing applications should be remapped to the newly federated server to restore the application list. Thus, depending on the configuration of the new profile, some applications will be shared with other existing clusters, and some will be unique to this new profile.
  12. Sharing database domains between clusters on IBM i
    If your production environment consists of multiple clusters in a single cell or of multiple clusters in different cells, you can share database domains between the clusters to support redundancy and failover. IBM WebSphere Portal data is stored in multiple database domains with different availability requirements, depending on the configuration of the production environment. If there are several production lines, each implemented as a server cluster, using shared database domains guarantees automatic synchronization of data between the production lines.
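
As referenced in step 3, a remote Deployment Manager profile can be created with the manageprofiles command. The sketch below is only an illustration: the profile, node, cell and host names are placeholders, and the installation path shown is typical for a distributed platform and will differ on IBM i.

/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -create \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/management \
  -serverType DEPLOYMENT_MANAGER \
  -profileName Dmgr01 -nodeName dmgrNode01 -cellName portalCell01 \
  -hostName dmgr.example.com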

In this article, we will look at installing and configuring WebSphere on CentOS 7. This manual demonstrates the installation of the trial version, but it is no different from the full version, so it does not matter.

So, let's go!

1) Preparation and configuration of the OS

In this walkthrough we will use the new CentOS 7. Surprisingly, out of the box it needs quite a bit of polishing before it is ready to work, so be prepared for this. So, install the minimal version without graphics and let's go. During installation, set up the network right away so that there is Internet access... this will make your life much easier :)

Let's install the basic software... which for some reason is not included in the package:

yum install net-tools nano wget

Now let's check our hostname and fix the hosts file (edit as you like):

nano /etc/hostname
nano /etc/hosts

ifconfig -a

The interface now has a new-style name (in our case ens32); to bring back the classic eth0 naming, you first need to tweak grub a little:

nano /etc/default/grub

At the end of the “GRUB_CMDLINE_LINUX” line you need to add “net.ifnames=0 biosdevname=0”. You’ll get something like this (not necessarily identical):

GRUB_CMDLINE_LINUX="rd.lvm.lv=rootvg/usrlv rd.lvm.lv=rootvg/swaplv crashkernel=auto vconsole.keymap=usrd.lvm.lv=rootvg/rootlv vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

We rename our network interface config to the normal, classic “eth0” and reboot (note that for the grub change to take effect, the grub config must also be regenerated with grub2-mkconfig, as is done below in the IPv6 step):

mv /etc/sysconfig/network-scripts/ifcfg-ens32 /etc/sysconfig/network-scripts/ifcfg-eth0
reboot

Setting up the network:

nano /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
ONBOOT=yes
BOOTPROTO=static
IPADDR=1.1.4.185
NETMASK=255.255.248.0
GATEWAY=1.1.1.9
DNS1=1.1.1.10
DNS2=1.1.1.90

Disable the unnecessary NetworkManager and reboot:

systemctl stop NetworkManager
systemctl disable NetworkManager
reboot

Check whether the IPv6 module shows up in the system:

lsmod | grep -i ipv6

If the output contains references to IPv6 (and it will), we proceed to disable it:

nano /etc/default/grub

At the beginning of the “GRUB_CMDLINE_LINUX” line you need to add “ipv6.disable=1”. You’ll get something like this:

GRUB_CMDLINE_LINUX="ipv6.disable=1 rd.lvm.lv=rootvg/usrlv...

Create a new config and save the result:

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot:

reboot

Let's check again and make sure everything is fine:

lsmod | grep -i ipv6

Add the EPEL repository (a repository of additional packages) to the CentOS 7 system:

wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
rpm -ivh epel-release-7-2.noarch.rpm
yum repolist

The new OS uses a “master” daemon that controls the other daemons: systemd, which replaced the outdated init.d initialization scripts. A new firewall is also used: firewalld instead of iptables. Let's check that it is running and open the ports we need (9080 and 9443):

systemctl status firewalld
firewall-cmd --permanent --zone=public --add-port=9080/tcp
firewall-cmd --permanent --zone=public --add-port=9443/tcp
systemctl restart firewalld

As a matter of fact, this is where the OS configuration ends and we proceed directly to the installation of IBM WebSphere Application Server Liberty Profile 8.5.5.

2) Install WebSphere

We will need an IBM account. After the usual registration, you can download any software for development purposes (also called the trial version).

The software cannot be downloaded directly. Instead, we download the universal Installation Manager and then use it to fetch the software we need. Unpack the contents of the archive BASETRIAL.agent.installer.linux.gtk.x86_64.zip into a folder named was and then upload it to the server into /root

We grant permissions and start the installation:

chmod -R 775 /root/was
cd was
./installc -c

First of all, Installation Manager will ask us to enter the login and password for our IBM account. Press p and enter your credentials:

We select only the following items for installation (Installation Manager, WebSphere Liberty and the Java SDK for it):

But we won’t install fixes. They are not required for installation, besides they are buggy and install with an error:

Final message. What is installed and where:

After that, we wait. How long? It depends on your Internet speed and the load on IBM's servers. You will need to download about 500 MB, or even more, so be patient... What is happening? The installer connects to its repositories and downloads the selected software from them. Everything is straightforward.

The successful installation message looks like this:

Theoretically, it is also possible to install all this through response files, without any dialogs. But that option also requires an already installed Installation Manager, so in our case it is not relevant.
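
For reference, such a silent installation would be launched roughly like this (the response file name and location, as well as the Installation Manager path, are placeholders; this is a sketch of the idea, not a step of this manual):

/opt/IBM/InstallationManager/eclipse/tools/imcl input /root/was/response.xml -log /tmp/install_log.xml -acceptLicense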

So, that's it! We have installed IBM WebSphere Application Server Liberty Profile 8.5.5 and the Java it needs to run! Congratulations! Now let's look at what we can do next.

3) WebSphere setup

a) Starting WebSphere

Let's create our test server:

/opt/IBM/WebSphere/Liberty/bin/server create PROJECT

Done. A new folder appears: /opt/IBM/WebSphere/Liberty/usr/servers/PROJECT. All settings and future modules will be located in it. For the server to listen on our address, you need to add host="1.1.4.185" (with our IP) next to httpPort="9080" (this lives in /opt/IBM/WebSphere/Liberty/usr/servers/PROJECT/server.xml). An example of such a config:
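
The screenshot of the config is not reproduced here, but a minimal server.xml of this kind could look roughly as follows (the feature list is just an example; the host and ports match the values used in this article):

<server description="PROJECT server">
    <featureManager>
        <feature>jsp-2.2</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint"
                  host="1.1.4.185"
                  httpPort="9080"
                  httpsPort="9443" />
</server>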

Let's launch:

/opt/IBM/WebSphere/Liberty/bin/server start PROJECT

Going to the address http://1.1.4.185:9080, we will see the following:

This means that everything is fine and the websphere has started.

b) Installation of the administration module

This step is optional, but the administration module makes working with WebSphere more convenient: through it you can stop and start modules individually, without having to stop the entire server.

So, install this module:

/opt/IBM/WebSphere/Liberty/bin/featureManager install adminCenter-1.0 --when-file-exists=ignore

To log in to the admin console as an administrator, use the account admin/password; to log in as a regular user, use nonadmin/nonadminpwd.
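
These accounts are not created automatically: they have to be declared in server.xml. A rough sketch of the relevant fragment is shown below (the realm name is arbitrary, and only admin is given the administrator role here):

<featureManager>
    <feature>adminCenter-1.0</feature>
</featureManager>

<basicRegistry id="basic" realm="basicRealm">
    <user name="admin" password="password" />
    <user name="nonadmin" password="nonadminpwd" />
</basicRegistry>

<administrator-role>
    <user>admin</user>
</administrator-role>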

Its login address is: http://1.1.4.185:9080/adminCenter/ The admin panel looks like this:



That's it! The administration module is installed.

c) Installing an extension module

You also need to install the extended package on WebSphere (an extended set of libraries and binaries); this is done extremely simply:

/opt/IBM/WebSphere/Liberty/bin/featureManager install extendedPackage-1.0

d) Installation of modules

We come to the most interesting part: installing modules (applications) in Liberty. How is this done? There are two ways - through the folder /opt/IBM/WebSphere/Liberty/usr/servers/PROJECT/dropins and through /opt/IBM/WebSphere/Liberty/usr/servers/PROJECT/apps.
From the dropins directory, modules are picked up and installed automatically. From the apps directory, they must be manually registered in the server.xml config. An example of a config in which a module is connected via apps:
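
The config screenshot is again not reproduced here; a fragment connecting a hypothetical myapp.war placed in the apps directory could look roughly like this:

<application id="myapp" name="myapp" type="war" location="myapp.war" />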

To run the server in the foreground, with logs output to the console, run the command:

/opt/IBM/WebSphere/Liberty/bin/server run PROJECT

e) Pros

Testing confirmed that it is enough to copy the /opt/IBM folder to another server and everything works out of the box. Very convenient: we can prepare the server we need in advance and ship the entire software package at once. And WebSphere Liberty is very lightweight and starts/stops very quickly :)
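
For example, such a transfer could be done with a single rsync call (the target host name here is a placeholder):

rsync -a /opt/IBM/ root@newserver:/opt/IBM/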
