Configuring HP MSA iSCSI in ESXi 6

This article shows how to connect an iSCSI LUN from a storage array (SAN) to an ESXi host.

So, I assume you already have a LUN on the storage array that we are going to present to the ESXi host. I have already described how to create a LUN on a NetApp array.

Before connecting an iSCSI LUN to an ESXi host, you need to create a software iSCSI adapter on the host. To do this, go to the host's Storage Adapters tab and click Add to add the iSCSI adapter.
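
If you prefer the command line, the same step can be done from the ESXi shell with esxcli. A minimal sketch; the adapter name on your host (e.g. vmhba33) may differ:

    # Enable the software iSCSI adapter (it is disabled by default)
    esxcli iscsi software set --enabled=true
    # Confirm it is enabled and find the adapter name (vmhba33 is an example)
    esxcli iscsi software get
    esxcli iscsi adapter list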

After that, select the newly created adapter and click Properties. Here you can see the adapter's iSCSI name (the IQN; the client lists it in the WWN column), which you will need when granting the host access to the LUN on the storage array.

On the screen that opens, the General Properties tab lets you set a friendly alias for the initiator.
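
The IQN and alias can also be viewed and set from the shell. A sketch; the adapter name and the alias value below are placeholders, not values from this walkthrough:

    # Show adapter details, including the initiator IQN
    esxcli iscsi adapter get --adapter=vmhba33
    # Set a friendly alias for the initiator (hypothetical value)
    esxcli iscsi adapter set --adapter=vmhba33 --alias=esxi01-iscsi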

On the Network Configuration tab, add the network adapter that will be used for iSCSI traffic: click Add and select the appropriate interface.
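
This tab configures iSCSI port binding. Assuming the VMkernel interface is vmk1 and the adapter is vmhba33 (both placeholders), the equivalent shell commands would be:

    # Bind the VMkernel NIC that carries iSCSI traffic to the software adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    # Verify the binding
    esxcli iscsi networkportal list --adapter=vmhba33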

On the Dynamic Discovery tab, enter the IP address of the storage array.
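
From the shell, a dynamic discovery (send targets) address can be added like this; the IP address and adapter name are placeholders:

    # Add the storage array as a send-targets (dynamic discovery) address
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.50:3260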

After that, rescan all storage adapters on the host.
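
The rescan can be triggered from the shell as well:

    # Rescan all storage adapters so the host detects the new LUN
    esxcli storage core adapter rescan --all
    # List detected storage devices
    esxcli storage core device list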

You should then see your LUN in the list of available devices.

Now add a datastore to the host. On the Storage tab, click Add Storage.

In the window that opens, select Disk/LUN and click Next.

Select the file system type.

On the next screen, click Next.

Next, enter a name for the new datastore.

Select how much of the available capacity to use.

On the final screen, click Finish and wait for the datastore to be created.

The new datastore then appears in the list of storage available for placing virtual machines.

That is how you attach an iSCSI LUN to an ESXi host.

How to configure ESXi 6.5 for iSCSI Shared Storage

Licensing ^

Note that you can use the free version of VMware ESXi to connect to iSCSI or NFS storage. I’ll demonstrate this below. In fact, no matter the license, any version of the VMware ESXi hypervisor can connect to shared storage.

You can then have several free ESXi hosts accessing the same datastore. However, because you don't have a central management server (VMware vCenter Server), you can only manage your VMs individually, host by host, and you can't move VMs from one host to another using vMotion.

Requirements ^

  • At least one ESXi host with two physical NICs (management and storage traffic)
  • Shared storage offering the iSCSI protocol
  • Network switch between shared storage and the ESXi host
  • vSphere client software

We’ll be using the new HTML5 host client, which is different from the old Windows client. One advantage of the HTML5 host client is that you don’t need to install any software on your management computer, because you connect via a web browser. You don’t need plugins; just connect through the URL https://ip_of_ESXi/ui (replace "ip_of_ESXi" with your installation’s IP address). Note that if you’re managing a vCenter Server-based VMware infrastructure, you’ll still need the Adobe Flash plugin.

Networking ^

First, we need to add a VMkernel network interface card (NIC) to our system so the ESXi host can connect to the remote storage device. To do so, select Networking > VMkernel NICs > Add VMkernel NIC.

We will also add a new port group. Please name the port group accordingly. In my example, I named it simply iscsi. Note that our storage system is on another subnet.
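
For reference, the same networking setup can be scripted with esxcli. The names (vSwitch0, iscsi, vmk1) and the addresses are examples matching this walkthrough, not mandatory values:

    # Create the port group on the existing standard vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=iscsi --vswitch-name=vSwitch0
    # Add a VMkernel NIC to that port group
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iscsi
    # Give it a static IP on the storage subnet (placeholder addresses)
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static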

The result should look like the screenshot below:

Add a VMkernel NIC for the iSCSI connection

But that’s not all. Before we go further, we need to make sure that:

  • At the vSwitch level: Both physical NICs connected to the ESXi host are active.
  • At the port group level: A single NIC is set as active. The other one is set as unused.

Navigate to Networking > Virtual Switches > select vSwitch0 > Edit settings, expand the NIC teaming section and make sure both NICs are marked as active. If for some reason one of the NICs isn’t active, please select the NIC and click the Mark as active button.

NIC teaming at the vSwitch level
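
The vSwitch-level teaming policy can also be checked or set from the shell. A sketch, assuming vmnic0 and vmnic1 are the two uplinks:

    # Make both physical uplinks active at the vSwitch level
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
    # Verify the result
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0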

We also need to override the failover order at the port group level, because by default the port group inherits the setting from our vSwitch. First, navigate to Networking > Port groups tab, select the iscsi port group > Edit settings > expand the NIC teaming section, and make sure to select Override failover order – Yes. Use the correct NIC for iSCSI storage traffic (vmnic1 in our case).

Override failover order on the iSCSI port group

Next, do the same for the Management Network and VM Network port groups, this time selecting only vmnic0 as the active NIC.

Override failover order on the Management and VM network port groups
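
Each of these overrides maps to one esxcli command per port group. A sketch, assuming the port group names and vmnic assignments from this example:

    # iSCSI port group: only vmnic1 carries storage traffic
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iscsi --active-uplinks=vmnic1
    # Management and VM traffic stay on vmnic0
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VM Network" --active-uplinks=vmnic0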

iSCSI adapter activation and configuration ^

After the process above, the next thing to do is to enable the software iSCSI adapter, which is disabled by default. To do this, select Storage > Adapters tab > Configure iSCSI.

Enable the software iSCSI Adapter

Then go to the Dynamic targets section, click the Add dynamic target button, and enter the IP address of your storage device.

Add a dynamic target to the iSCSI initiator

Now you can click the Save button. Next, go back to Storage > Adapters > Configure iSCSI. You should see the Static targets section populated.

Dynamic target population within the iSCSI initiator

If this is not the case, something went wrong during your configuration. Check your settings and make sure that everything is as explained above.
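
If you prefer to double-check from the shell, the discovery configuration can be listed directly:

    # List configured dynamic (send-targets) addresses
    esxcli iscsi adapter discovery sendtarget list
    # Static targets discovered from the storage device should show up here
    esxcli iscsi adapter discovery statictarget list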

Then, while still in the Storage section, go to Datastores and click New datastore to create a new Virtual Machine File System (VMFS) datastore.

Create a new datastore

Then enter a name to recognize this datastore later.

Create an iSCSI datastore: use a meaningful name

Select VMFS 6 from the drop-down menu. (By default, VMFS 5 is preselected.)

Create a VMFS 6 datastore

You’ll see a summary page. Click the Finish button.

Create a shared VMFS 6 datastore

You should see this new datastore populated among all the other datastores.

New VMFS 6 shared datastore
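
For reference, the datastore creation can also be done from the ESXi shell. This is only a rough sketch: the device ID below is a placeholder for your LUN, the commands are destructive, and the partition type GUID is the standard VMFS GUID from VMware's documentation:

    # CAUTION: destructive. Replace the device ID with your LUN's actual
    # identifier (see esxcli storage core device list).
    DEV=/vmfs/devices/disks/naa.600000000000000000000000000000ab
    # Create a GPT partition table on the device
    partedUtil mklabel "$DEV" gpt
    # Find the last usable sector on the device
    END=$(partedUtil getUsableSectors "$DEV" | awk '{print $2}')
    # Create one aligned partition of type VMFS
    partedUtil setptbl "$DEV" gpt "1 2048 $END AA31E02A400F11DB9590000C2911D1B8 0"
    # Format the partition as VMFS 6 and label the datastore
    vmkfstools -C vmfs6 -S iscsi_datastore "${DEV}:1"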

That’s all. We have configured an ESXi 6.5 host, connected it to a remote storage device via iSCSI and created a new VMFS 6 datastore.

If you plan to connect another ESXi host to the same storage device, you don’t have to create the datastore again. However, you’ll still need to configure the ESXi host in the same manner. Both hosts will be able to use the shared datastore for running virtual machines.
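
On that second host, once the adapter and discovery are configured the same way, a rescan is enough for the existing datastore to be mounted automatically; for instance:

    # Rescan, then list mounted filesystems; the shared datastore should appear
    esxcli storage core adapter rescan --all
    esxcli storage filesystem list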

VMFS is a clustered file system that can manage multiple concurrent connections and implements per-file locking. The latest VMFS 6 can handle up to 2048 different paths, and you can run up to 2048 VMs on a single datastore, too. The maximum file size is 62 TB, and the maximum volume size can be up to 64 TB.
