VM locks up during snapshot

    Question

  • I have a 2012 R2 Hyper-V cluster backed by a NetApp SAN. When the NetApp takes a Hyper-V snapshot through SnapManager for Hyper-V, several of my Ubuntu 16 servers lock up. I will post the syslog below; note that there are many more 'Time has been changed' messages, most of which I removed for brevity. I have all the recommended guest services and kernels installed per Microsoft's documentation on Hyper-V + Ubuntu. The VM uses a single disk with an EFI partition, a boot partition, and an LVM partition, with ext4 on top of LVM. I'd appreciate any help you can provide.
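    For what it's worth, to line the lockups up with the backup windows I filter the VSS freeze/thaw events out of syslog. A minimal sketch of what I run is below; the inlined sample lines stand in for /var/log/syslog, and against the real file the command is just the final grep:

    ```shell
    # Filter the Hyper-V VSS FREEZE/THAW events so lockup times can be matched
    # against backup runs. The here-doc is a stand-in for /var/log/syslog.
    cat <<'EOF' > /tmp/syslog.sample
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=FREEZE: succeeded
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=THAW: succeeded
    EOF
    # Print only the freeze/thaw pair; their timestamps bound the frozen window.
    grep -E 'VSS: op=(FREEZE|THAW)' /tmp/syslog.sample
    ```

    In the log below the FREEZE and THAW land in the same second, so the guest was not frozen for long; the hung tasks show up minutes later, during the post-thaw SCSI rescan.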

    Here is the syslog:

    Apr  6 05:01:23 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 6h 8min 2.973386s random time.
    Apr  6 05:01:27 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:02:23 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 4h 21min 27.763829s random time.
    Apr  6 05:02:23 UbuntuServer01 systemd[4427]: Time has been changed
    Apr  6 05:02:28 UbuntuServer01 systemd[1]: Time has been changed
    Apr  6 05:02:28 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 7h 8min 12.733924s random time.
    Apr  6 05:02:28 UbuntuServer01 systemd[1]: Time has been changed
    Apr  6 05:02:28 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 7h 8min 12.733924s random time.
    Apr  6 05:02:28 UbuntuServer01 systemd[4427]: Time has been changed
    Apr  6 05:02:33 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:02:33 UbuntuServer01 systemd[4427]: Time has been changed
    Apr  6 05:02:33 UbuntuServer01 systemd[1]: Time has been changed
    Apr  6 05:02:33 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 6h 17min 27.101562s random time.
    Apr  6 05:05:01 UbuntuServer01 CRON[49003]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Apr  6 05:10:26 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:13:32 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:14:35 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:15:01 UbuntuServer01 CRON[49089]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Apr  6 05:17:01 UbuntuServer01 CRON[49096]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
    Apr  6 05:18:53 UbuntuServer01 puppet-agent[49131]: Applied catalog in 1.19 seconds
    Apr  6 05:19:25 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:21:55 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:22:58 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:24:03 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=FREEZE: succeeded
    Apr  6 05:24:17 UbuntuServer01 systemd[4427]: Time has been changed
    Apr  6 05:24:17 UbuntuServer01 systemd[1]: Time has been changed
    Apr  6 05:24:17 UbuntuServer01 systemd[1]: apt-daily.timer: Adding 3h 29min 44.249146s random time.
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.658029] sd 0:0:0:0: [storvsc] Sense Key : Unit Attention [current]
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.658034] sd 0:0:0:0: [storvsc] Add. Sense: Changed operating definition
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.658052] sd 0:0:0:0: Warning! Received an indication that the operating parameters on this target have changed. The Linux SCSI layer does not automa
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.660120] sd 0:0:0:1: [storvsc] Sense Key : Unit Attention [current]
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.660123] sd 0:0:0:1: [storvsc] Add. Sense: Changed operating definition
    Apr  6 05:24:17 UbuntuServer01 kernel: [58022.660126] sd 0:0:0:1: Warning! Received an indication that the operating parameters on this target have changed. The Linux SCSI layer does not automa
    Apr  6 05:24:17 UbuntuServer01 Hyper-V VSS: VSS: op=THAW: succeeded
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.024329] scsi host0: scsi scan: INQUIRY result too short (5), using 36
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.024335] scsi 0:0:0:3: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.024566] scsi 0:0:0:4: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.024848] scsi 0:0:0:5: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.025081] scsi 0:0:0:6: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.025264] scsi 0:0:0:7: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.025954] scsi 0:0:1:1: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 4
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035299] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035304] sd 0:0:0:3: [sdc] 1 512-byte logical blocks: (512 B/512 B)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035305] sd 0:0:0:3: [sdc] 0-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035382] sd 0:0:0:3: [sdc] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035382] sd 0:0:0:3: [sdc] Mode Sense: 00 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035562] sd 0:0:0:3: Attached scsi generic sg3 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035595] sd 0:0:0:3: [sdc] Asking for cache data failed
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035612] sd 0:0:0:3: [sdc] Assuming drive cache: write through
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.035856] sd 0:0:0:4: Attached scsi generic sg4 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036245] sd 0:0:0:4: [sdd] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036248] sd 0:0:0:4: [sdd] 1 512-byte logical blocks: (512 B/512 B)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036250] sd 0:0:0:4: [sdd] 0-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036285] sd 0:0:0:5: Attached scsi generic sg5 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036391] sd 0:0:0:5: [sde] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036406] sd 0:0:0:4: [sdd] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036391] sd 0:0:0:5: [sde] 1 512-byte logical blocks: (512 B/512 B)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036409] sd 0:0:0:4: [sdd] Mode Sense: 00 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036394] sd 0:0:0:5: [sde] 0-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036613] sd 0:0:0:4: [sdd] Asking for cache data failed
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036643] sd 0:0:0:4: [sdd] Assuming drive cache: write through
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036742] sd 0:0:0:6: Attached scsi generic sg6 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036776] sd 0:0:0:5: [sde] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036776] sd 0:0:0:5: [sde] Mode Sense: 00 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036827] sd 0:0:0:5: [sde] Asking for cache data failed
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.036852] sd 0:0:0:5: [sde] Assuming drive cache: write through
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.038101] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.038531] sd 0:0:0:5: [sde] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.040430] sd 0:0:0:4: [sdd] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.040496] sd 0:0:0:7: Attached scsi generic sg7 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.040840] sd 0:0:0:6: [sdf] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.040844] sd 0:0:0:6: [sdf] 1 512-byte logical blocks: (512 B/512 B)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.040846] sd 0:0:0:6: [sdf] 0-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.041733] sd 0:0:0:6: [sdf] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.041733] sd 0:0:0:6: [sdf] Mode Sense: 00 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.042730] sd 0:0:1:1: Attached scsi generic sg8 type 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.042792] sd 0:0:0:6: [sdf] Asking for cache data failed
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.042798] sd 0:0:0:6: [sdf] Assuming drive cache: write through
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.042955] ldm_validate_partition_table(): Disk read failed.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.042969] Dev sdc: unable to read RDB block 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043017]  sdc: unable to read partition table
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043021] sdc: partition table beyond EOD, enabling native capacity
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043198] sd 0:0:0:7: [sdg] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043202] sd 0:0:0:7: [sdg] 1 512-byte logical blocks: (512 B/512 B)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043204] sd 0:0:0:7: [sdg] 0-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043308] sd 0:0:0:7: [sdg] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043311] sd 0:0:0:7: [sdg] Mode Sense: 00 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043353] sd 0:0:0:7: [sdg] Asking for cache data failed
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043378] sd 0:0:0:7: [sdg] Assuming drive cache: write through
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043336] sd 0:0:0:6: [sdf] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.043803] sd 0:0:0:7: [sdg] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.048302] sd 0:0:1:1: [sdh] 41943040 512-byte logical blocks: (21.5 GB/20.0 GiB)
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.048305] sd 0:0:1:1: [sdh] 4096-byte physical blocks
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.049531] sd 0:0:1:1: [sdh] Write Protect is off
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.049534] sd 0:0:1:1: [sdh] Mode Sense: 0f 00 00 00
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.050535] sd 0:0:1:1: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.052527] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.052746] ldm_validate_partition_table(): Disk read failed.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.052760] Dev sdc: unable to read RDB block 0
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.052794]  sdc: unable to read partition table
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.052797] sdc: partition table beyond EOD, truncated
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.053033] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  6 05:24:18 UbuntuServer01 kernel: [58023.053180] sd 0:0:0:3: [sdc] Attached SCSI disk
    Apr  6 05:24:18 UbuntuServer01 systemd-udevd[49545]: Process '/lib/udev/hdparm' failed with exit code 6.
    Apr  6 05:25:02 UbuntuServer01 CRON[49755]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036038] INFO: task kworker/0:1:4335 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036069]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036090] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036119] kworker/0:1     D ffff880016ddbd38     0  4335      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036128] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036130]  ffff880016ddbd38 ffff8800375a3860 ffff880003fdb800 ffff8800375a3800
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036132]  ffff880016ddc000 ffff880035604064 ffff8800375a3800 00000000ffffffff
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036134]  ffff880035604068 ffff880016ddbd50 ffffffff81838545 ffff880035604060
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036136] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036141]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036143]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036146]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036148]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036151]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036154]  [<ffffffffc00f5a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036157]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036160]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036162]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036164]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036166]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036168]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036170]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036173] INFO: task kworker/0:2:44896 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036199]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036220] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036248] kworker/0:2     D ffff880015937c58     0 44896      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036252] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036253]  ffff880015937c58 ffff880015937c48 ffff88003eb89c00 ffff88003566c600
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036255]  ffff880015938000 ffffffff81ed9a30 ffff8800482f8810 0000000000800020
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036257]  ffff8800335e3640 ffff880015937c70 ffffffff81838545 ffffffffffffffff
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036259] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036261]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036264]  [<ffffffff810a374e>] async_synchronize_cookie_domain+0x6e/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036267]  [<ffffffff810c4210>] ? wake_atomic_t_function+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036269]  [<ffffffff810a3848>] async_synchronize_full_domain+0x18/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036272]  [<ffffffff815cd7dd>] sd_remove+0x4d/0xc0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036275]  [<ffffffff8155d081>] __device_release_driver+0xa1/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036278]  [<ffffffff8155d153>] device_release_driver+0x23/0x30
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036280]  [<ffffffff8155c7a1>] bus_remove_device+0x101/0x170
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036282]  [<ffffffff81558909>] device_del+0x139/0x270
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036284]  [<ffffffff815c6903>] __scsi_remove_device+0xd3/0xe0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036286]  [<ffffffff815c6936>] scsi_remove_device+0x26/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036288]  [<ffffffffc00f5a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036290]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036292]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036294]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036296]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036297]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036299]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036301]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036303]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036305] INFO: task kworker/u128:2:48480 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036332]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036352] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036381] kworker/u128:2  D ffff88000775f848     0 48480      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036385] Workqueue: events_unbound async_run_entry_fn
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036386]  ffff88000775f848 ffff88003766ec60 ffff880035613800 ffff88004fa08000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036388]  ffff880007760000 ffff88003c416dc0 7fffffffffffffff ffffffff81838d40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036389]  ffff88000775f9a8 ffff88000775f860 ffffffff81838545 0000000000000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036391] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036393]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036395]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036397]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036399]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036401]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036403]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036404]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036406]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036409]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036413]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036415]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036417]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036419]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036421]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036423]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036426]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036428]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036431]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036434]  [<ffffffff813de8a0>] ? add_part+0xe0/0xe0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036436]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036438]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036440]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036442]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036444]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036447]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036449]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036451]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036454]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036456]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036458]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036460]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036462]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036464]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036466]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036467]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036469]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036470]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036473]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036474]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036475] INFO: task kworker/u128:1:48907 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036502]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036523] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036551] kworker/u128:1  D ffff88004e7bf848     0 48907      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036555] Workqueue: events_unbound async_run_entry_fn
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036556]  ffff88004e7bf848 ffff88003766d148 ffff880035613800 ffff880003fdb800
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036558]  ffff88004e7c0000 ffff88003c416dc0 7fffffffffffffff ffffffff81838d40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036559]  ffff88004e7bf9a8 ffff88004e7bf860 ffffffff81838545 0000000000000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036561] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036563]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036565]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036567]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036568]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036570]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036571]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036573]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036575]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036577]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036579]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036581]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036583]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036585]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036586]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036588]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036590]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036592]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036594]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036597]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036599]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036601]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036603]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036605]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036607]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036609]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036611]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036612]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036614]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036616]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036618]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036620]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036622]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036624]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036626]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036627]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036629]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036631]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036632]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036635] INFO: task kworker/u128:3:49464 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036662]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036683] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036711] kworker/u128:3  D ffff8800529df848     0 49464      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036715] Workqueue: events_unbound async_run_entry_fn
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036716]  ffff8800529df848 ffff88003766da50 ffff88003f862a00 ffff880058b0d400
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036717]  ffff8800529e0000 ffff88003c4d6dc0 7fffffffffffffff ffffffff81838d40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036719]  ffff8800529df9a8 ffff8800529df860 ffffffff81838545 0000000000000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036721] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036723]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036724]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036726]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036728]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036729]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036731]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036733]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036734]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036736]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036738]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036740]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036743]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036744]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036745]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036747]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036749]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036751]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036753]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036756]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036758]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036759]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036761]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036763]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036765]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036767]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036769]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036771]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036773]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036774]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036776]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036778]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036780]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036782]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036784]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036785]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036787]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036789]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036790]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036792] INFO: task kworker/0:0:49465 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036818]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036839] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036867] kworker/0:0     D ffff8800528c3d38     0 49465      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036871] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036871]  ffff8800528c3d38 ffff880058b09c60 ffff88003ace8000 ffff880058b09c00
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036873]  ffff8800528c4000 ffff880035604064 ffff880058b09c00 00000000ffffffff
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036875]  ffff880035604068 ffff8800528c3d50 ffffffff81838545 ffff880035604060
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036877] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036879]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036881]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036882]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036884]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036886]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036888]  [<ffffffffc00f5a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036890]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036892]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036894]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036896]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036897]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036899]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036901]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036902] INFO: task kworker/u128:4:49466 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036929]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036949] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036978] kworker/u128:4  D ffff880052843848     0 49466      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036981] Workqueue: events_unbound async_run_entry_fn
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036982]  ffff880052843848 ffff88003766e358 ffff88003f860e00 ffff880058b0aa00
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036984]  ffff880052844000 ffff88003c456dc0 7fffffffffffffff ffffffff81838d40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036986]  ffff8800528439a8 ffff880052843860 ffffffff81838545 0000000000000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036987] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036989]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036991]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036992]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036994]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036996]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036997]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.036999]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037001]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037003]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037005]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037007]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037009]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037011]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037169]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037171]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037173]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037175]  [<ffffffff813e2ddd>] is_gpt_valid+0x3ed/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037177]  [<ffffffff813e3342>] efi_partition+0x252/0x7d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037179]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037181]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037183]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037185]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037187]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037189]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037191]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037193]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037195]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037196]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037198]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037200]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037202]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037204]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037206]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037207]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037209]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037211]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037212]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037214]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037216]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037218] INFO: task kworker/0:3:49468 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037244]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037264] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037292] kworker/0:3     D ffff88000aa93d38     0 49468      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037296] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037297]  ffff88000aa93d38 ffff88003ace8060 ffff88003aceaa00 ffff88003ace8000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037298]  ffff88000aa94000 ffff880035604064 ffff88003ace8000 00000000ffffffff
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037300]  ffff880035604068 ffff88000aa93d50 ffffffff81838545 ffff880035604060
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037302] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037304]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037305]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037307]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037309]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037311]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037313]  [<ffffffffc00f5a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037315]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037317]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037319]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037320]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037322]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037324]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037325]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037327] INFO: task kworker/0:4:49469 blocked for more than 120 seconds.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037353]       Not tainted 4.4.0-70-generic #91-Ubuntu
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037373] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037402] kworker/0:4     D ffff88000aa97d38     0 49469      2 0x00000000
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037405] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037406]  ffff88000aa97d38 ffff88003aceaa60 ffff880035613800 ffff88003aceaa00
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037407]  ffff88000aa98000 ffff880035604064 ffff88003aceaa00 00000000ffffffff
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037409]  ffff880035604068 ffff88000aa97d50 ffffffff81838545 ffff880035604060
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037411] Call Trace:
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037413]  [<ffffffff81838545>] schedule+0x35/0x80
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037414]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037416]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037418]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037420]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037422]  [<ffffffffc00f5a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037424]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037426]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037428]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037429]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037431]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037433]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  6 05:27:15 UbuntuServer01 kernel: [58200.037434]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  6 05:29:39 UbuntuServer01 docker[4042]: 2017/04/06 05:29:39 grpc: Server.processUnaryRPC failed to write status stream error: code = 4 desc = "context deadline exceeded"

    Thursday, April 6, 2017 6:40 PM

All replies

  • After the last line in the syslog, the VM is completely unresponsive in the Hyper-V console.
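    Since the hung-task reports repeat across several kworkers, here is a short filter script I use to pull the affected tasks out of a large syslog (a sketch; the regex assumes the stock kernel hung-task message format, so adjust it if your kernel prints something different):

    ```python
    import re

    # Hung-task watchdog lines look like:
    #   INFO: task kworker/0:0:49465 blocked for more than 120 seconds.
    # The greedy \S+ backtracks so the final ":<digits>" is captured as the PID,
    # even though kworker names themselves contain colons.
    HUNG_RE = re.compile(r"INFO: task (\S+):(\d+) blocked for more than (\d+) seconds")

    def hung_tasks(syslog_text):
        """Return (task_name, pid) tuples for every hung-task report found."""
        return [(m.group(1), int(m.group(2))) for m in HUNG_RE.finditer(syslog_text)]

    if __name__ == "__main__":
        sample = (
            "kernel: [  600.044070] INFO: task kworker/0:0:4 blocked for more than 120 seconds.\n"
            "kernel: [  600.044380] INFO: task kworker/u128:1:30 blocked for more than 120 seconds.\n"
        )
        for name, pid in hung_tasks(sample):
            print(f"{name} (pid {pid})")
    ```

    Running it against my full syslog shows the same pattern as above: every blocked task is either a storvsc_remove_lun worker or an sd_probe_async worker, which points at the SCSI add/remove churn during the VSS snapshot rather than at any one disk.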
    Thursday, April 6, 2017 6:42 PM
  • kernel: [  600.044070] INFO: task kworker/0:0:4 blocked for more than 120 seconds.
    kernel: [  600.044140]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.044182] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.044242] kworker/0:0     D ffff8801fd6ffc58     0     4      2 0x00000000
    kernel: [  600.044257] Workqueue: events storvsc_remove_lun [hv_storvsc]
    kernel: [  600.044261]  ffff8801fd6ffc58 ffff8801fd6ffc48 ffffffff81e11500 ffff8801fd6d2640
    kernel: [  600.044265]  ffff8801fd700000 ffffffff81ed9a30 ffff8801fbb58810 0000000000800010
    kernel: [  600.044269]  ffff8801f9d5a540 ffff8801fd6ffc70 ffffffff81838545 ffffffffffffffff
    kernel: [  600.044273] Call Trace:
    kernel: [  600.044283]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.044292]  [<ffffffff810a374e>] async_synchronize_cookie_domain+0x6e/0x150
    kernel: [  600.044299]  [<ffffffff810c4210>] ? wake_atomic_t_function+0x60/0x60
    kernel: [  600.044303]  [<ffffffff810a3848>] async_synchronize_full_domain+0x18/0x20
    kernel: [  600.044309]  [<ffffffff815cd7dd>] sd_remove+0x4d/0xc0
    kernel: [  600.044316]  [<ffffffff8155d081>] __device_release_driver+0xa1/0x150
    kernel: [  600.044320]  [<ffffffff8155d153>] device_release_driver+0x23/0x30
    kernel: [  600.044324]  [<ffffffff8155c7a1>] bus_remove_device+0x101/0x170
    kernel: [  600.044328]  [<ffffffff81558909>] device_del+0x139/0x270
    kernel: [  600.044334]  [<ffffffff815c6903>] __scsi_remove_device+0xd3/0xe0
    kernel: [  600.044338]  [<ffffffff815c6936>] scsi_remove_device+0x26/0x40
    kernel: [  600.044343]  [<ffffffffc0063a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    kernel: [  600.044348]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.044352]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.044356]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.044360]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.044364]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.044370]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.044373]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.044380] INFO: task kworker/u128:1:30 blocked for more than 120 seconds.
    kernel: [  600.044435]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.044478] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.044537] kworker/u128:1  D ffff8801fcf0f848     0    30      2 0x00000000
    kernel: [  600.044545] Workqueue: events_unbound async_run_entry_fn
    kernel: [  600.044547]  ffff8801fcf0f848 ffff8801f9665a50 ffff8801fbaa3300 ffff8801fce71980
    kernel: [  600.044550]  ffff8801fcf10000 ffff8801fe616dc0 7fffffffffffffff ffffffff81838d40
    kernel: [  600.044554]  ffff8801fcf0f9a8 ffff8801fcf0f860 ffffffff81838545 0000000000000000
    kernel: [  600.044558] Call Trace:
    kernel: [  600.044562]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.044565]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.044569]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    kernel: [  600.044574]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    kernel: [  600.044578]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.044581]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    kernel: [  600.044585]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    kernel: [  600.044588]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    kernel: [  600.044594]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    kernel: [  600.044600]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    kernel: [  600.044605]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    kernel: [  600.044609]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    kernel: [  600.044613]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    kernel: [  600.044616]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    kernel: [  600.044621]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    kernel: [  600.044625]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    kernel: [  600.044630]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    kernel: [  600.044636]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    kernel: [  600.044678]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    kernel: [  600.044683]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    kernel: [  600.044687]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    kernel: [  600.044691]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    kernel: [  600.044695]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    kernel: [  600.044699]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    kernel: [  600.044706]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    kernel: [  600.044709]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    kernel: [  600.044713]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    kernel: [  600.044719]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    kernel: [  600.044723]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    kernel: [  600.044727]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    kernel: [  600.044731]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    kernel: [  600.044736]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.044739]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.044743]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.044747]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.044750]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.044753]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.044758]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.044761]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.044769] INFO: task kworker/u128:2:121 blocked for more than 120 seconds.
    kernel: [  600.044844]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.044887] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.044946] kworker/u128:2  D ffff8801f7ca3848     0   121      2 0x00000000
    kernel: [  600.044954] Workqueue: events_unbound async_run_entry_fn
    kernel: [  600.044956]  ffff8801f7ca3848 ffff8801f9666c60 ffff8801f753a640 ffff880035afb300
    kernel: [  600.044959]  ffff8801f7ca4000 ffff8801fe656dc0 7fffffffffffffff ffffffff81838d40
    kernel: [  600.044963]  ffff8801f7ca39a8 ffff8801f7ca3860 ffffffff81838545 0000000000000000
    kernel: [  600.044967] Call Trace:
    kernel: [  600.044971]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.044974]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.044978]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    kernel: [  600.044981]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    kernel: [  600.044984]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.044988]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    kernel: [  600.044991]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    kernel: [  600.044994]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    kernel: [  600.044998]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    kernel: [  600.045003]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    kernel: [  600.045007]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    kernel: [  600.045012]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    kernel: [  600.045015]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    kernel: [  600.045018]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    kernel: [  600.045022]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    kernel: [  600.045026]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    kernel: [  600.045030]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    kernel: [  600.045034]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    kernel: [  600.045039]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    kernel: [  600.045043]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    kernel: [  600.045047]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    kernel: [  600.045051]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    kernel: [  600.045055]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    kernel: [  600.045059]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    kernel: [  600.045063]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    kernel: [  600.045066]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    kernel: [  600.045070]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    kernel: [  600.045074]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    kernel: [  600.045077]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    kernel: [  600.045081]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    kernel: [  600.045085]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    kernel: [  600.045089]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.045093]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.045097]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045099]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.045103]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045107]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.045110]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045115] INFO: task kworker/0:2:243 blocked for more than 120 seconds.
    kernel: [  600.045168]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.045233] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.045294] kworker/0:2     D ffff8800f2347d38     0   243      2 0x00000000
    kernel: [  600.045301] Workqueue: events storvsc_remove_lun [hv_storvsc]
    kernel: [  600.045303]  ffff8800f2347d38 ffff8801f7d16660 ffff8800343acc80 ffff8801f7d16600
    kernel: [  600.045307]  ffff8800f2348000 ffff880035a1f864 ffff8801f7d16600 00000000ffffffff
    kernel: [  600.045310]  ffff880035a1f868 ffff8800f2347d50 ffffffff81838545 ffff880035a1f860
    kernel: [  600.045314] Call Trace:
    kernel: [  600.045319]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.045323]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    kernel: [  600.045326]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    kernel: [  600.045330]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    kernel: [  600.045335]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    kernel: [  600.045339]  [<ffffffffc0063a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    kernel: [  600.045343]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.045347]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.045351]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045354]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.045357]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045361]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.045364]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045401] INFO: task kworker/u128:0:2524 blocked for more than 120 seconds.
    kernel: [  600.045478]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.045521] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.045580] kworker/u128:0  D ffff8800f1e63848     0  2524      2 0x00000000
    kernel: [  600.045588] Workqueue: events_unbound async_run_entry_fn
    kernel: [  600.045590]  ffff8800f1e63848 ffff8801f9667568 ffffffff81e11500 ffff8801fbaa3fc0
    kernel: [  600.045594]  ffff8800f1e64000 ffff8801fe616dc0 7fffffffffffffff ffffffff81838d40
    kernel: [  600.045597]  ffff8800f1e639a8 ffff8800f1e63860 ffffffff81838545 0000000000000000
    kernel: [  600.045601] Call Trace:
    kernel: [  600.045605]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.045608]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.045612]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    kernel: [  600.045615]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    kernel: [  600.045619]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.045622]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    kernel: [  600.045626]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    kernel: [  600.045629]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    kernel: [  600.045633]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    kernel: [  600.045637]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    kernel: [  600.045642]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    kernel: [  600.045646]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    kernel: [  600.045649]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    kernel: [  600.045652]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    kernel: [  600.045656]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    kernel: [  600.045660]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    kernel: [  600.045664]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    kernel: [  600.045668]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    kernel: [  600.045673]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    kernel: [  600.045677]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    kernel: [  600.045681]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    kernel: [  600.045685]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    kernel: [  600.045688]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    kernel: [  600.045692]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    kernel: [  600.045697]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    kernel: [  600.045721]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    kernel: [  600.045725]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    kernel: [  600.045729]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    kernel: [  600.045732]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    kernel: [  600.045736]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    kernel: [  600.045740]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    kernel: [  600.045744]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.045748]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.045751]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045755]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045758]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.045761]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045766]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.045769]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045773] INFO: task kworker/0:1:2549 blocked for more than 120 seconds.
    kernel: [  600.045827]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.045869] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.045927] kworker/0:1     D ffff8801fba9fd38     0  2549      2 0x00000000
    kernel: [  600.045934] Workqueue: events storvsc_remove_lun [hv_storvsc]
    kernel: [  600.045936]  ffff8801fba9fd38 ffffffff810b731b ffff8801f74abfc0 ffff8801fbaa3300
    kernel: [  600.045939]  ffff8801fbaa0000 ffff880035a1f864 ffff8801fbaa3300 00000000ffffffff
    kernel: [  600.045943]  ffff880035a1f868 ffff8801fba9fd50 ffffffff81838545 ffff880035a1f860
    kernel: [  600.045946] Call Trace:
    kernel: [  600.045950]  [<ffffffff810b731b>] ? dequeue_entity+0x41b/0xa80
    kernel: [  600.045954]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.045957]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    kernel: [  600.045961]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    kernel: [  600.045965]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    kernel: [  600.045969]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    kernel: [  600.045973]  [<ffffffffc0063a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    kernel: [  600.045977]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.045981]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.045985]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045989]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.045992]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.045995]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.045999]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.046004]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046029] INFO: task kworker/u128:3:2587 blocked for more than 120 seconds.
    kernel: [  600.046085]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.046126] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.046185] kworker/u128:3  D ffff8801fb5df848     0  2587      2 0x00000000
    kernel: [  600.046192] Workqueue: events_unbound async_run_entry_fn
    kernel: [  600.046194]  ffff8801fb5df848 ffff8801f9666358 ffff8801fa3da640 ffff8801fb6bd940
    kernel: [  600.046197]  ffff8801fb5e0000 ffff8801fe656dc0 7fffffffffffffff ffffffff81838d40
    kernel: [  600.046201]  ffff8801fb5df9a8 ffff8801fb5df860 ffffffff81838545 0000000000000000
    kernel: [  600.046204] Call Trace:
    kernel: [  600.046209]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.046212]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.046215]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    kernel: [  600.046218]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    kernel: [  600.046222]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.046225]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    kernel: [  600.046229]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    kernel: [  600.046232]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    kernel: [  600.046236]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    kernel: [  600.046240]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    kernel: [  600.046245]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    kernel: [  600.046249]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    kernel: [  600.046251]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    kernel: [  600.046254]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    kernel: [  600.046258]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    kernel: [  600.046262]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    kernel: [  600.046266]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    kernel: [  600.046271]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    kernel: [  600.046275]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    kernel: [  600.046279]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    kernel: [  600.046283]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    kernel: [  600.046287]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    kernel: [  600.046291]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    kernel: [  600.046294]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    kernel: [  600.046299]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    kernel: [  600.046302]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    kernel: [  600.046306]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    kernel: [  600.046309]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    kernel: [  600.046313]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    kernel: [  600.046317]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    kernel: [  600.046320]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    kernel: [  600.046324]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.046328]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.046332]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.046335]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.046338]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.046342]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046346]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.046349]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046352] INFO: task kworker/0:3:2590 blocked for more than 120 seconds.
    kernel: [  600.046405]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.046447] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.046505] kworker/0:3     D ffff8801fb7e7d38     0  2590      2 0x00000000
    kernel: [  600.046511] Workqueue: events storvsc_remove_lun [hv_storvsc]
    kernel: [  600.046513]  ffff8801fb7e7d38 ffff8801fb6be660 ffff8801fb6bb300 ffff8801fb6be600
    kernel: [  600.046517]  ffff8801fb7e8000 ffff880035a1f864 ffff8801fb6be600 00000000ffffffff
    kernel: [  600.046520]  ffff880035a1f868 ffff8801fb7e7d50 ffffffff81838545 ffff880035a1f860
    kernel: [  600.046524] Call Trace:
    kernel: [  600.046528]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.046531]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    kernel: [  600.046535]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    kernel: [  600.046538]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    kernel: [  600.046543]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    kernel: [  600.046547]  [<ffffffffc0063a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    kernel: [  600.046551]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.046554]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.046558]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.046561]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.046564]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046569]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.046572]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046574] INFO: task kworker/u128:4:2596 blocked for more than 120 seconds.
    kernel: [  600.046629]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.046671] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.046753] kworker/u128:4  D ffff8801faca3848     0  2596      2 0x00000000
    kernel: [  600.046761] Workqueue: events_unbound async_run_entry_fn
    kernel: [  600.046763]  ffff8801faca3848 ffff8801fb4d0000 ffff8800343b0cc0 ffff8801f753a640
    kernel: [  600.046766]  ffff8801faca4000 ffff8801fe656dc0 7fffffffffffffff ffffffff81838d40
    kernel: [  600.046770]  ffff8801faca39a8 ffff8801faca3860 ffffffff81838545 0000000000000000
    kernel: [  600.046773] Call Trace:
    kernel: [  600.046778]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.046781]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.046784]  [<ffffffff8183b695>] schedule_timeout+0x1b5/0x270
    kernel: [  600.046788]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    kernel: [  600.046791]  [<ffffffff81838d40>] ? bit_wait+0x60/0x60
    kernel: [  600.046794]  [<ffffffff81837a74>] io_schedule_timeout+0xa4/0x110
    kernel: [  600.046798]  [<ffffffff81838d5b>] bit_wait_io+0x1b/0x70
    kernel: [  600.046801]  [<ffffffff818388ed>] __wait_on_bit+0x5d/0x90
    kernel: [  600.046805]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    kernel: [  600.046809]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    kernel: [  600.046814]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    kernel: [  600.046818]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    kernel: [  600.046821]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    kernel: [  600.046824]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    kernel: [  600.046828]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    kernel: [  600.046832]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    kernel: [  600.046836]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    kernel: [  600.046840]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    kernel: [  600.046845]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    kernel: [  600.046849]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    kernel: [  600.046853]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    kernel: [  600.046856]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    kernel: [  600.046860]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    kernel: [  600.046864]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    kernel: [  600.046868]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    kernel: [  600.046872]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    kernel: [  600.046875]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    kernel: [  600.046879]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    kernel: [  600.046882]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    kernel: [  600.046886]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    kernel: [  600.046890]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    kernel: [  600.046894]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.046897]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.046901]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.046904]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.046907]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046912]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.046915]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.046918] INFO: task kworker/0:4:2598 blocked for more than 120 seconds.
    kernel: [  600.046972]       Not tainted 4.4.0-71-generic #92-Ubuntu
    kernel: [  600.047013] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: [  600.047071] kworker/0:4     D ffff8801facabd38     0  2598      2 0x00000000
    kernel: [  600.047078] Workqueue: events storvsc_remove_lun [hv_storvsc]
    kernel: [  600.047079]  ffff8801facabd38 ffff8801fb6bb360 ffff8801fb6b9980 ffff8801fb6bb300
    kernel: [  600.047083]  ffff8801facac000 ffff880035a1f864 ffff8801fb6bb300 00000000ffffffff
    kernel: [  600.047086]  ffff880035a1f868 ffff8801facabd50 ffffffff81838545 ffff880035a1f860
    kernel: [  600.047090] Call Trace:
    kernel: [  600.047094]  [<ffffffff81838545>] schedule+0x35/0x80
    kernel: [  600.047098]  [<ffffffff818387ee>] schedule_preempt_disabled+0xe/0x10
    kernel: [  600.047101]  [<ffffffff8183a429>] __mutex_lock_slowpath+0xb9/0x130
    kernel: [  600.047105]  [<ffffffff8183a4bf>] mutex_lock+0x1f/0x30
    kernel: [  600.047109]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    kernel: [  600.047113]  [<ffffffffc0063a10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    kernel: [  600.047117]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    kernel: [  600.047121]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    kernel: [  600.047125]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    kernel: [  600.047128]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    kernel: [  600.047131]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    kernel: [  600.047135]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    kernel: [  600.047138]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0

    on console:


    • Edited by openmsk Friday, April 7, 2017 8:00 AM
    Friday, April 7, 2017 7:58 AM
  • We're experiencing the same thing as DBWYCL. This is new behaviour, perhaps triggered by a recent Windows Update? The Ubuntu VM has Linux Integration Services installed and running. This server's nightly backup task runs at midnight, and then things go haywire until about 7am, when someone forcibly restarts the VM after complaints of non-responsiveness:

    Apr  6 23:59:56 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 2h 5min 44.111886s random time.
    Apr  6 23:59:56 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 1h 25min 13.300495s random time.
    Apr  7 00:00:00 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:01 xTuple-HyperV-VM Hyper-V message repeated 3 times: [ VSS: VSS: op=CHECK HOT BACKUP]
    Apr  7 00:00:01 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:01 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:01 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 44min 13.538605s random time.
    Apr  7 00:00:01 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 7h 9min 15.636454s random time.
    Apr  7 00:00:01 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:01 xTuple-HyperV-VM Hyper-V message repeated 4 times: [ VSS: VSS: op=CHECK HOT BACKUP]
    Apr  7 00:00:06 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:06 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:06 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 2h 42min 53.269062s random time.
    Apr  7 00:00:06 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 10h 10min 46.703990s random time.
    Apr  7 00:00:11 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:11 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:11 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 1h 50min 24.734777s random time.
    Apr  7 00:00:11 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 6h 3min 17.954694s random time.
    Apr  7 00:00:12 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:16 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:16 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 1h 16min 30.748136s random time.
    Apr  7 00:00:16 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 4h 44min 56.028079s random time.
    Apr  7 00:00:16 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:21 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:21 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:21 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 2h 36min 37.136981s random time.
    Apr  7 00:00:21 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 8h 28min 54.280597s random time.
    Apr  7 00:00:23 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:24 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:26 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:26 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 4h 28min 37.677351s random time.
    Apr  7 00:00:26 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 11h 42min 12.922515s random time.
    Apr  7 00:00:26 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:31 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:31 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:31 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 3h 1min 42.803493s random time.
    Apr  7 00:00:31 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 9h 31min 51.138055s random time.
    Apr  7 00:00:36 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:36 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 5h 41min 12.358710s random time.
    Apr  7 00:00:36 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 4h 59min 34.504561s random time.
    Apr  7 00:00:36 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:36 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:41 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:41 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:41 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 1h 12min 37.977671s random time.
    Apr  7 00:00:41 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 1h 45min 8.828685s random time.
    Apr  7 00:00:46 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:46 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 5h 10min 51.830719s random time.
    Apr  7 00:00:46 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 22min 12.303373s random time.
    Apr  7 00:00:46 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:51 xTuple-HyperV-VM Hyper-V VSS: VSS: op=CHECK HOT BACKUP
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 4h 44min 47.647109s random time.
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 7h 9min 44.744761s random time.
    Apr  7 00:00:51 xTuple-HyperV-VM Hyper-V VSS: VSS: op=FREEZE: succeeded
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[4048]: Time has been changed
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: Time has been changed
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: snapd.refresh.timer: Adding 3h 54min 32.282243s random time.
    Apr  7 00:00:51 xTuple-HyperV-VM systemd[1]: apt-daily.timer: Adding 11h 7min 32.706534s random time.
    Apr  7 00:00:51 xTuple-HyperV-VM Hyper-V VSS: VSS: op=THAW: succeeded
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.900889] scsi host0: scsi scan: INQUIRY result too short (5), using 36
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.900899] scsi 0:0:0:2: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.901265] scsi 0:0:0:3: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.901595] scsi 0:0:0:4: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.901889] scsi 0:0:0:5: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.902177] scsi 0:0:0:6: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.902473] scsi 0:0:0:7: Direct-Access                                    PQ: 0 ANSI: 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.921178] sd 0:0:0:2: Attached scsi generic sg2 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924232] sd 0:0:0:2: [sdb] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924242] sd 0:0:0:2: [sdb] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924246] sd 0:0:0:2: [sdb] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924362] sd 0:0:0:3: Attached scsi generic sg3 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924634] sd 0:0:0:2: [sdb] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.924640] sd 0:0:0:2: [sdb] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.927353] sd 0:0:0:2: [sdb] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.927361] sd 0:0:0:2: [sdb] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.928240] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.928259] sd 0:0:0:3: [sdc] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.928263] sd 0:0:0:3: [sdc] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.928617] sd 0:0:0:2: [sdb] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.928759] sd 0:0:0:4: Attached scsi generic sg4 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932252] sd 0:0:0:4: [sdd] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932261] sd 0:0:0:4: [sdd] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932265] sd 0:0:0:4: [sdd] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932324] sd 0:0:0:4: [sdd] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932328] sd 0:0:0:4: [sdd] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932382] sd 0:0:0:4: [sdd] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.932386] sd 0:0:0:4: [sdd] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.933925] sd 0:0:0:3: [sdc] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.933932] sd 0:0:0:3: [sdc] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.934533] sd 0:0:0:5: Attached scsi generic sg5 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.935008] sd 0:0:0:6: Attached scsi generic sg6 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.936078] sd 0:0:0:7: Attached scsi generic sg7 type 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.936149] sd 0:0:0:4: [sdd] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.937252] sd 0:0:0:5: [sde] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.937260] sd 0:0:0:5: [sde] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.937264] sd 0:0:0:5: [sde] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.937868] sd 0:0:0:5: [sde] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.937874] sd 0:0:0:5: [sde] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.939525] sd 0:0:0:5: [sde] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.939532] sd 0:0:0:5: [sde] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.939669] sd 0:0:0:3: [sdc] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.939676] sd 0:0:0:3: [sdc] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.940329] sd 0:0:0:5: [sde] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.940894] sd 0:0:0:6: [sdf] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.940903] sd 0:0:0:6: [sdf] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.940906] sd 0:0:0:6: [sdf] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.941736] ldm_validate_partition_table(): Disk read failed.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.941762] Dev sdb: unable to read RDB block 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.941789]  sdb: unable to read partition table
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.941797] sdb: partition table beyond EOD, enabling native capacity
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.942144] sd 0:0:0:3: [sdc] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.942442] sd 0:0:0:6: [sdf] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.942448] sd 0:0:0:6: [sdf] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.944121] sd 0:0:0:6: [sdf] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.944128] sd 0:0:0:6: [sdf] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.945147] sd 0:0:0:7: [sdg] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.945156] sd 0:0:0:7: [sdg] 1 512-byte logical blocks: (512 B/512 B)
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.945159] sd 0:0:0:7: [sdg] 0-byte physical blocks
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.946534] sd 0:0:0:6: [sdf] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.946938] sd 0:0:0:7: [sdg] Write Protect is off
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.946945] sd 0:0:0:7: [sdg] Mode Sense: 00 00 00 00
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.947077] sd 0:0:0:7: [sdg] Asking for cache data failed
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.947082] sd 0:0:0:7: [sdg] Assuming drive cache: write through
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.948153] sd 0:0:0:7: [sdg] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.985486] sd 0:0:0:2: [sdb] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.985810] ldm_validate_partition_table(): Disk read failed.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.985846] Dev sdb: unable to read RDB block 0
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.985896]  sdb: unable to read partition table
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.985902] sdb: partition table beyond EOD, truncated
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.986318] sd 0:0:0:2: [sdb] Sector size 0 reported, assuming 512.
    Apr  7 00:00:51 xTuple-HyperV-VM kernel: [57117.988176] sd 0:0:0:2: [sdb] Attached SCSI disk
    Apr  7 00:00:51 xTuple-HyperV-VM systemd-udevd[11661]: Process '/lib/udev/hdparm' failed with exit code 6.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048134] INFO: task kworker/u128:1:7603 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048143]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048147] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048179] kworker/u128:1  D ffff8800f0463848     0  7603      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048195] Workqueue: events_unbound async_run_entry_fn
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048199]  ffff8800f0463848 ffff8800ed198000 ffff8800ee53c600 ffff880137fe8000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048204]  ffff8800f0464000 ffff880102656dc0 7fffffffffffffff ffffffff81838d60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048208]  ffff8800f04639a8 ffff8800f0463860 ffffffff81838565 0000000000000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048212] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048238]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048242]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048247]  [<ffffffff8183b6b5>] schedule_timeout+0x1b5/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048253]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048257]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048260]  [<ffffffff81837a94>] io_schedule_timeout+0xa4/0x110
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048274]  [<ffffffff81838d7b>] bit_wait_io+0x1b/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048278]  [<ffffffff8183890d>] __wait_on_bit+0x5d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048285]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048292]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048301]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048305]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048308]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048311]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048318]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048324]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048339]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048346]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048351]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048357]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048365]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048369]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048374]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048378]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048388]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048402]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048406]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048413]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048417]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048425]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048429]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048435]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048439]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048443]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048447]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048451]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048457]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048472]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048475]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048481] INFO: task kworker/0:2:11067 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048484]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048486] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048488] kworker/0:2     D ffff8800efc0fd38     0 11067      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048499] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048501]  ffff8800efc0fd38 ffff8800ee76aa60 ffff8800f0df0000 ffff8800ee76aa00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048505]  ffff8800efc10000 ffff88003565d064 ffff8800ee76aa00 00000000ffffffff
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048513]  ffff88003565d068 ffff8800efc0fd50 ffffffff81838565 ffff88003565d060
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048526] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048532]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048536]  [<ffffffff8183880e>] schedule_preempt_disabled+0xe/0x10
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048540]  [<ffffffff8183a449>] __mutex_lock_slowpath+0xb9/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048544]  [<ffffffff8183a4df>] mutex_lock+0x1f/0x30
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048551]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048556]  [<ffffffffc008aa10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048560]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048564]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048580]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048584]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048587]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048592]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048596]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048600] INFO: task kworker/0:0:11303 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048602]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048604] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048606] kworker/0:0     D ffff880070b63d38     0 11303      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048613] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048614]  ffff880070b63d38 ffff880070b63dc0 ffffffff81e11500 ffff8800f0df5400
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048618]  ffff880070b64000 ffff88003565d064 ffff8800f0df5400 00000000ffffffff
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048634]  ffff88003565d068 ffff880070b63d50 ffffffff81838565 ffff88003565d060
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048638] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048642]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048645]  [<ffffffff8183880e>] schedule_preempt_disabled+0xe/0x10
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048649]  [<ffffffff8183a449>] __mutex_lock_slowpath+0xb9/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048653]  [<ffffffff8183a4df>] mutex_lock+0x1f/0x30
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048658]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048663]  [<ffffffffc008aa10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048667]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048671]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048677]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048690]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048694]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048699]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048703]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048706]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048710] INFO: task kworker/0:1:11563 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048712]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048713] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048716] kworker/0:1     D ffff8800f73cbc58     0 11563      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048723] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048725]  ffff8800f73cbc58 ffff8800f73cbc48 ffff880076950000 ffff880137f89c00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048729]  ffff8800f73cc000 ffffffff81ed9a30 ffff880003943410 0000000000800010
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048733]  ffff8800f05d4180 ffff8800f73cbc70 ffffffff81838565 ffffffffffffffff
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048739] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048742]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048747]  [<ffffffff810a374e>] async_synchronize_cookie_domain+0x6e/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048752]  [<ffffffff810c4210>] ? wake_atomic_t_function+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048758]  [<ffffffff810a3848>] async_synchronize_full_domain+0x18/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048761]  [<ffffffff815cd7dd>] sd_remove+0x4d/0xc0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048769]  [<ffffffff8155d081>] __device_release_driver+0xa1/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048773]  [<ffffffff8155d153>] device_release_driver+0x23/0x30
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048779]  [<ffffffff8155c7a1>] bus_remove_device+0x101/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048783]  [<ffffffff81558909>] device_del+0x139/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048790]  [<ffffffff815c6903>] __scsi_remove_device+0xd3/0xe0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048794]  [<ffffffff815c6936>] scsi_remove_device+0x26/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048800]  [<ffffffffc008aa10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048813]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048818]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048823]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048827]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048830]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048833]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048840]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048843]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048850] INFO: task kworker/u128:2:11600 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048855]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048857] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048859] kworker/u128:2  D ffff8800ed343848     0 11600      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048877] Workqueue: events_unbound async_run_entry_fn
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048887]  ffff8800ed343848 ffff8800ed19a420 ffff8800ed6ce200 ffff880137fbe200
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048891]  ffff8800ed344000 ffff880102696dc0 7fffffffffffffff ffffffff81838d60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048896]  ffff8800ed3439a8 ffff8800ed343860 ffffffff81838565 0000000000000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048902] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048909]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048912]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048916]  [<ffffffff8183b6b5>] schedule_timeout+0x1b5/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048919]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048930]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048936]  [<ffffffff81837a94>] io_schedule_timeout+0xa4/0x110
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048942]  [<ffffffff81838d7b>] bit_wait_io+0x1b/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048945]  [<ffffffff8183890d>] __wait_on_bit+0x5d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048950]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048962]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048968]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048972]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048975]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048978]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.048984]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049007]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049011]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049019]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049026]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049030]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049036]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049040]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049044]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049051]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049058]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049062]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049067]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049074]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049078]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049085]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049094]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049114]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049118]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049122]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049127]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049131]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049135]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049138]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049141] INFO: task kworker/u128:3:11601 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049143]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049146] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049147] kworker/u128:3  D ffff8800f09d3848     0 11601      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049153] Workqueue: events_unbound async_run_entry_fn
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049156]  ffff8800f09d3848 ffff8800ed199210 ffff8800ee53d400 ffff880137fbaa00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049172]  ffff8800f09d4000 ffff8801026d6dc0 7fffffffffffffff ffffffff81838d60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049178]  ffff8800f09d39a8 ffff8800f09d3860 ffffffff81838565 0000000000000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049181] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049185]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049189]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049192]  [<ffffffff8183b6b5>] schedule_timeout+0x1b5/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049195]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049199]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049213]  [<ffffffff81837a94>] io_schedule_timeout+0xa4/0x110
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049217]  [<ffffffff81838d7b>] bit_wait_io+0x1b/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049220]  [<ffffffff8183890d>] __wait_on_bit+0x5d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049224]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049228]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049233]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049239]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049242]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049245]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049248]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049252]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049258]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049273]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049278]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049282]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049287]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049292]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049296]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049300]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049304]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049308]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049322]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049326]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049329]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049333]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049337]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049341]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049346]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049351]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049355]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049358]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049362]  [<ffffffff810a9bcd>] ? finish_task_switch+0x7d/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049366]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049372]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049375]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049378] INFO: task kworker/u128:4:11602 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049383]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049385] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049387] kworker/u128:4  D ffff8800ed65b848     0 11602      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049398] Workqueue: events_unbound async_run_entry_fn
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049401]  ffff8800ed65b848 ffff8800ed198908 ffff8800f0c98e00 ffff880137fb9c00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049405]  ffff8800ed65c000 ffff880102656dc0 7fffffffffffffff ffffffff81838d60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049409]  ffff8800ed65b9a8 ffff8800ed65b860 ffffffff81838565 0000000000000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049413] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049416]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049419]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049423]  [<ffffffff8183b6b5>] schedule_timeout+0x1b5/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049426]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049431]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049434]  [<ffffffff81837a94>] io_schedule_timeout+0xa4/0x110
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049438]  [<ffffffff81838d7b>] bit_wait_io+0x1b/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049441]  [<ffffffff8183890d>] __wait_on_bit+0x5d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049451]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049474]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049478]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049483]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049487]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049490]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049494]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049498]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049502]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049507]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049512]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049516]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049520]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049525]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049540]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049545]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049550]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049555]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049558]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049562]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049565]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049569]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049573]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049577]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049581]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049599]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049603]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049606]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049610]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049615]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049617]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049621] INFO: task kworker/0:3:11607 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049634]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049636] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049637] kworker/0:3     D ffff88013c043d38     0 11607      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049646] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049648]  ffff88013c043d38 ffff8801026142c0 ffff8800f0c98e00 ffff8800f0c9aa00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049651]  ffff88013c044000 ffff88003565d064 ffff8800f0c9aa00 00000000ffffffff
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049655]  ffff88003565d068 ffff88013c043d50 ffffffff81838565 ffff88003565d060
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049663] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049667]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049670]  [<ffffffff8183880e>] schedule_preempt_disabled+0xe/0x10
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049674]  [<ffffffff8183a449>] __mutex_lock_slowpath+0xb9/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049680]  [<ffffffff8183a4df>] mutex_lock+0x1f/0x30
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049695]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049699]  [<ffffffffc008aa10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049703]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049707]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049711]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049714]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049718]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049722]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049726]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049729] INFO: task kworker/0:4:11613 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049731]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049732] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049734] kworker/0:4     D ffff88011273fd38     0 11613      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049740] Workqueue: events storvsc_remove_lun [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049742]  ffff88011273fd38 ffff8800f0c98e60 ffff8800efd52a00 ffff8800f0c98e00
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049745]  ffff880112740000 ffff88003565d064 ffff8800f0c98e00 00000000ffffffff
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049762]  ffff88003565d068 ffff88011273fd50 ffffffff81838565 ffff88003565d060
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049766] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049769]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049773]  [<ffffffff8183880e>] schedule_preempt_disabled+0xe/0x10
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049776]  [<ffffffff8183a449>] __mutex_lock_slowpath+0xb9/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049780]  [<ffffffff8183a4df>] mutex_lock+0x1f/0x30
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049784]  [<ffffffff815c692e>] scsi_remove_device+0x1e/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049789]  [<ffffffffc008aa10>] storvsc_remove_lun+0x40/0x60 [hv_storvsc]
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049794]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049798]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049802]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049805]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049808]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049813]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049818]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049832] INFO: task kworker/u128:5:11614 blocked for more than 120 seconds.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049834]       Not tainted 4.4.0-72-generic #93-Ubuntu
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049836] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049839] kworker/u128:5  D ffff88013c027848     0 11614      2 0x00000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049845] Workqueue: events_unbound async_run_entry_fn
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049847]  ffff88013c027848 ffff8800ed19ad28 ffff8801014f8e00 ffff8800efd55400
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049851]  ffff88013c028000 ffff880102656dc0 7fffffffffffffff ffffffff81838d60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049855]  ffff88013c0279a8 ffff88013c027860 ffffffff81838565 0000000000000000
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049858] Call Trace:
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049862]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049865]  [<ffffffff81838565>] schedule+0x35/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049869]  [<ffffffff8183b6b5>] schedule_timeout+0x1b5/0x270
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049874]  [<ffffffff813c88a6>] ? submit_bio+0x76/0x170
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049879]  [<ffffffff81838d60>] ? bit_wait+0x60/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049894]  [<ffffffff81837a94>] io_schedule_timeout+0xa4/0x110
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049897]  [<ffffffff81838d7b>] bit_wait_io+0x1b/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049900]  [<ffffffff8183890d>] __wait_on_bit+0x5d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049904]  [<ffffffff8124a9c0>] ? blkdev_readpages+0x20/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049909]  [<ffffffff8118e3cb>] wait_on_page_bit+0xcb/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049913]  [<ffffffff810c4250>] ? autoremove_wake_function+0x40/0x40
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049918]  [<ffffffff8118e5f9>] wait_on_page_read+0x49/0x50
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049921]  [<ffffffff8118fc6d>] do_read_cache_page+0x8d/0x1b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049924]  [<ffffffff8118fda9>] read_cache_page+0x19/0x20
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049930]  [<ffffffff813dbe7d>] read_dev_sector+0x2d/0x90
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049935]  [<ffffffff813e292d>] read_lba+0x14d/0x210
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049950]  [<ffffffff813e31e2>] efi_partition+0xf2/0x7d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049956]  [<ffffffff814027ab>] ? string.isra.4+0x3b/0xd0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049961]  [<ffffffff814046e9>] ? snprintf+0x49/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049966]  [<ffffffff813e30f0>] ? compare_gpts+0x280/0x280
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049969]  [<ffffffff813dd25e>] check_partition+0x13e/0x220
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049973]  [<ffffffff813dc760>] rescan_partitions+0xc0/0x2b0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049977]  [<ffffffff8124b8bd>] __blkdev_get+0x30d/0x460
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049981]  [<ffffffff8124be79>] blkdev_get+0x129/0x330
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049986]  [<ffffffff81229b89>] ? unlock_new_inode+0x49/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049991]  [<ffffffff8124a858>] ? bdget+0x118/0x130
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049995]  [<ffffffff813da20e>] add_disk+0x3fe/0x490
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.049998]  [<ffffffff81569241>] ? update_autosuspend+0x51/0x60
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050001]  [<ffffffff8156930c>] ? __pm_runtime_use_autosuspend+0x5c/0x80
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050007]  [<ffffffff815d01d5>] sd_probe_async+0x115/0x1d0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050021]  [<ffffffff810a35d8>] async_run_entry_fn+0x48/0x150
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050025]  [<ffffffff8109a555>] process_one_work+0x165/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050029]  [<ffffffff8109a8bb>] worker_thread+0x4b/0x4c0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050033]  [<ffffffff8109a870>] ? process_one_work+0x480/0x480
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050035]  [<ffffffff810a0be8>] kthread+0xd8/0xf0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050038]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050043]  [<ffffffff8183ca0f>] ret_from_fork+0x3f/0x70
    Apr  7 00:02:53 xTuple-HyperV-VM kernel: [57240.050048]  [<ffffffff810a0b10>] ? kthread_create_on_node+0x1e0/0x1e0
    Apr  7 00:05:01 xTuple-HyperV-VM CRON[11926]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Apr  7 00:09:29 xTuple-HyperV-VM systemd[1]: Started CUPS Scheduler.
    Apr  7 07:17:11 xTuple-HyperV-VM rsyslogd: [origin software="rsyslogd" swVersion="8.16.0" x-pid="701" x-info="http://www.rsyslog.com"] start
    Apr  7 07:17:11 xTuple-HyperV-VM rsyslogd-2222: command 'KLogPermitNonKernelFacility' is currently not permitted - did you already set it via a RainerScript command (v6+ config)? [v8.16.0 try http://www.rsyslog.com/e/2222 ]
    Apr  7 07:17:11 xTuple-HyperV-VM rsyslogd: rsyslogd's groupid changed to 108
    Apr  7 07:17:11 xTuple-HyperV-VM rsyslogd: rsyslogd's userid changed to 104
    Apr  7 07:17:11 xTuple-HyperV-VM systemd-modules-load[240]: Inserted module 'lp'
    Apr  7 07:17:11 xTuple-HyperV-VM systemd-modules-load[240]: Inserted module 'ppdev'
    Apr  7 07:17:11 xTuple-HyperV-VM kernel: [    0.000000] Initializing cgroup subsys cpuset
    Apr  7 07:17:11 xTuple-HyperV-VM kernel: [    0.000000] Initializing cgroup subsys cpu
    Apr  7 07:17:11 xTuple-HyperV-VM systemd-modules-load[240]: Inserted module 'parport_pc'
    Apr  7 07:17:11 xTuple-HyperV-VM systemd[1]: Started Load Kernel Modules.
    Apr  7 07:17:11 xTuple-HyperV-VM systemd[1]: Starting Apply Kernel Variables...
    Apr  7 07:17:11 xTuple-HyperV-VM systemd[1]: Mounting FUSE Control File System...
    


    Friday, April 7, 2017 8:54 PM
  • Hi Sir,

    Have you tried backing up the Linux VM at the Hyper-V host level?

    For Linux VMs on Hyper-V, I'd suggest seeking further assistance in the forum below:

    https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=linuxintegrationservices&filter=alltypes&sort=lastpostdesc

     

    Best Regards,

    Elton


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Monday, April 10, 2017 12:53 AM
    Moderator
  • I was reading this forum post:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/1ec0291b-482d-4b03-afb1-a853b098ec5b/ubuntu-crash-during-hyperv-backup?forum=linuxintegrationservices

    and I think this might be a known issue; see the following bug report:
    https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1679898
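
    If it is that bug, a quick sanity check is to compare the VM's running kernel against the trace above and list the installed Hyper-V guest tools. This is only a sketch: the exact kernel version carrying the fix is not stated here, so verify it in the Launchpad report before drawing conclusions.

    ```shell
    # Record the running kernel. The hung-task trace above came from
    # 4.4.0-72-generic, so anything at or below that is suspect if the
    # bug report says the fix shipped in a later update.
    running_kernel="$(uname -r)"
    echo "Running kernel: ${running_kernel}"

    # List installed Hyper-V guest tools packages (names assume a stock
    # Ubuntu guest); skip silently if dpkg is not available.
    if command -v dpkg >/dev/null 2>&1; then
        dpkg -l 'linux-cloud-tools-*' 2>/dev/null | awk '/^ii/ {print $2, $3}'
    fi
    ```

    Posting that output alongside the syslog excerpt usually speeds up triage in the linked forum.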

    Monday, April 10, 2017 12:45 PM