What's the worst disk latency you've ever seen?

I know. I've left it in the hands of the vendors, both of whom seem to be completely clueless... oh well, I'm billing.
 
I have seen those numbers, unfortunately... to the point that the guest VMs would BSOD.

There can be a lot of causes. esxtop is your friend. Ours ended up being a combination of front-end SAN port saturation and misconfigured array groups. What kind of storage subsystem are you working with? Are you seeing queuing at the vmkernel or at the disk?
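
If you haven't poked at it yet, esxtop's batch mode (esxtop -b) dumps every counter to CSV, which makes that question easy to answer. Here's a rough sketch of pulling out the worst device (DAVG) vs. kernel (KAVG) latencies -- the counter names are from 4.x batch output, so treat them as assumptions and adjust for your version:

Code:
import csv

DEVICE = "Average Device MilliSec/Command"   # DAVG: time spent in the driver/array
KERNEL = "Average Kernel MilliSec/Command"   # KAVG: time spent queued in the vmkernel

with open("esxtop.csv", newline="") as f:    # captured with: esxtop -b -n 60 > esxtop.csv
    reader = csv.reader(f)
    header = next(reader)
    cols = {i: name for i, name in enumerate(header)
            if DEVICE in name or KERNEL in name}
    worst = {}
    for row in reader:
        for i, name in cols.items():
            try:
                worst[name] = max(worst.get(name, 0.0), float(row[i]))
            except (ValueError, IndexError):
                continue

# High DAVG with low KAVG points at the storage itself;
# high KAVG means I/O is queuing inside the host.
for name, val in sorted(worst.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{val:8.1f} ms  {name}")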
 
What SAN do you have?
What is the disk configuration? Number of disks per LUN, LUN size, how many VMs per LUN?

Looks like there's a lot of disk contention. Reallocating certain VMs to different LUNs should let you boost performance for the I/O-intensive applications while keeping performance steady for the other VMs.
 
This is the "SAN": http://www.mdi.com/documents/New ES14xx-DAR Overview_.pdf
Specifically the ES1450-DAR model. It's a SuperMicro with 4GB RAM, dual quad-core Xeons, 2x 36GB SAS in RAID1 for the OS (WSS 2003 R2) and 14x 500GB SATA drives tied to an Adaptec 3805 with 128MB cache and no battery. Underwhelming to say the least.

Disk configuration is 12x 500GB SATA in RAID6 with 2x hot spares. iSCSI LUNs are presented via the WSS iSCSI target service, and the LUNs are actually .VHD files, which just blew my mind. The vendor originally set up 3x 1TB "LUNs" and spread 12x VMs across them. I set up another 300GB one a few months back for a new server. I looked at their configuration and was just baffled. I'm no storage guru-master, but I can usually figure out most things when dealing with actual enterprise-class equipment.

What really sucks about this whole thing is that in order to get any support from VMware or the storage vendor, I have to go through the company that sold the client this garbage, since they hold all the support contracts for this and most of the client's other hardware/software for their LOB applications. I won't say who this vendor is, but I will say the client is a bank; I'm sure you can figure it out if you try. So I've spent pretty much the whole day jumping through hoops, and finally talked to one tech from that company who was actually not an idiot. But then he has to open a support case with VMware, MDI, and the other team that actually installed the gear almost three years ago.

Just got an update from VMware, by proxy, saying the latency is definitely on the storage itself. They went through three different sets of logs I sent and found no issues with the network or anything else -- just the storage. Of course, the storage manufacturer doesn't have after-hours support, so now I'm guessing the Adaptec 3805 is shitting the bed. I see myself having a very exciting weekend...
 
Here's a sampling of the vmkernel logs. It's literally pages and pages of this over and over again.

Code:
Feb 12 00:45:53 PROD-ESX vmkernel: 118:04:29:07.578 cpu5:58605)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
Feb 12 00:45:54 PROD-ESX vmkernel: 118:04:29:08.577 cpu1:4179)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
Feb 12 00:45:55 PROD-ESX vmkernel: 118:04:29:09.678 cpu7:58683)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
Feb 12 00:45:56 PROD-ESX vmkernel: 118:04:29:10.589 cpu0:58552)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
[... snip: the same ScsiCore "Invalid sense buffer" warning repeats about once a second for pages ...]
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.570 cpu0:4096)VMNIX: VmkDev: 2891: a/r=2 cmd=0x28 sn=752228 dsk=vml0:7:0 reqbuf=ffff81001f3b1480 (sg=1) 
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.570 cpu2:4111)ScsiDeviceIO: 770: Command 0x28 to device "naa.60003ffd427cd3e4bd8db308147e5e93" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x2 0x4 0x0. 
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.570 cpu2:4111)WARNING: NMP: nmp_DeviceStartLoop: NMP Device "naa.60003ffd427cd3e4bd8db308147e5e93" is blocked. Not starting I/O from device. 
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.570 cpu0:4096)VMNIX: VmkDev: 2936: abort sn=752228, vmkret=0. 
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.578 cpu3:58145)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
Feb 12 00:46:51 PROD-ESX vmkernel: 118:04:30:05.578 cpu6:4222)WARNING: NMP: nmp_DeviceAttemptFailover: Retry world restore device "naa.60003ffd427cd3e4bd8db308147e5e93" - no more commands to retry 
Feb 12 00:46:56 PROD-ESX vmkernel: 118:04:30:10.571 cpu0:4096)VMNIX: VmkDev: 2984: abort succeeded. 
Feb 12 00:46:56 PROD-ESX vmkernel: 118:04:30:10.577 cpu0:4096)WARNING: NMP: nmp_IssueCommandToDevice: I/O could not be issued to device "naa.60003ffd427cd3e4bd8db308147e5e93" due to Not found 
Feb 12 00:46:56 PROD-ESX vmkernel: 118:04:30:10.577 cpu0:4096)WARNING: NMP: nmp_DeviceRetryCommand: Device "naa.60003ffd427cd3e4bd8db308147e5e93": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device. 
Feb 12 00:46:56 PROD-ESX vmkernel: 118:04:30:10.577 cpu0:4096)WARNING: NMP: nmp_DeviceStartLoop: NMP Device "naa.60003ffd427cd3e4bd8db308147e5e93" is blocked. Not starting I/O from device. 
Feb 12 00:46:57 PROD-ESX vmkernel: 118:04:30:11.578 cpu0:58683)WARNING: ScsiCore: 1119: Invalid sense buffer: error=0x0, valid=0x0, segment=0x0, key=0x2 
Feb 12 00:46:57 PROD-ESX vmkernel: 118:04:30:11.578 cpu7:4222)WARNING: NMP: nmp_DeviceAttemptFailover: Retry world failover device "naa.60003ffd427cd3e4bd8db308147e5e93" - issuing command 0x410002068b80 
Feb 12 00:46:57 PROD-ESX vmkernel: 118:04:30:11.578 cpu7:4222)WARNING: NMP: nmp_DeviceAttemptFailover: Retry world failover device "naa.60003ffd427cd3e4bd8db308147e5e93" - failed to issue command due to Not found (APD), try again... 
Feb 12 00:46:57 PROD-ESX vmkernel: 118:04:30:11.578 cpu7:4222)WARNING: NMP: nmp_DeviceAttemptFailover: Logical device "naa.60003ffd427cd3e4bd8db308147e5e93": awaiting fast path state update...
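
And in case anyone wants the breakdown: a quick-and-dirty way to tally which warnings dominate a dump like this (assuming it's saved locally as vmkernel.log):

Code:
from collections import Counter

counts = Counter()
with open("vmkernel.log") as f:
    for line in f:
        if "WARNING:" not in line:
            continue
        # Keep the warning text, drop the per-line noise (device names, hex values)
        msg = line.split("WARNING:", 1)[1]
        msg = msg.split('"')[0].split("error=")[0].strip()
        counts[msg] += 1

for msg, n in counts.most_common(10):
    print(f"{n:6d}  {msg}")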
 
Well, while the 3805 is a POS, the fact that all of your disks are in one 12x 500GB RAID6 is the main problem. Why don't you set up 3x 4-disk RAID5s with 2 global hot spares, then carve three 1.5TB LUNs? With a 1:1 LUN/RAID-set relationship you'll be able to run more VMs without them interfering with each other.
Say you have LUNs A, B, and C with VMs 1-10 on each.
Come month-end, when everyone and their dog is hitting the database on VM A6, the I/O strain lands on just that one RAID set/LUN, and the B* and C* VMs are unaffected because they sit on an entirely different set of disks.
Right now, if any one VM starts getting hammered with I/O requests, it drags down the performance of ALL of your VMs.

I should also mention that while in a lot of scenarios it's beneficial to add spindles to increase I/O, the 3805 sucks and (I'm guessing) your 500GB SATA drives have low-density platters, so you've hit an I/O wall. I had a 31605, and past 4 spindles I saw diminishing returns from adding more disks to the array. I'd be willing to bet that a 4x 500GB RAID5 would bench about as well on that same controller as your 12x 500GB RAID6; rough numbers below.
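
Just to put back-of-the-envelope numbers on the write-penalty side of this (the per-spindle IOPS and read/write mix here are assumptions, not measurements, and the 3805's own ceiling sits on top of whatever the spindles could do):

Code:
# Assumptions: ~75 random IOPS per 7.2k SATA spindle, standard write
# penalties (RAID5 = 4 back-end I/Os per host write, RAID6 = 6), and
# a 70/30 read/write mix typical of mixed VM workloads.
SPINDLE_IOPS = 75
READ_RATIO = 0.70

def effective_iops(spindles: int, write_penalty: int) -> float:
    raw = spindles * SPINDLE_IOPS
    # Every host write costs `write_penalty` back-end I/Os.
    return raw / (READ_RATIO + (1 - READ_RATIO) * write_penalty)

print("12x RAID6, shared by all VMs:", round(effective_iops(12, 6)))  # ~360
print(" 4x RAID5, per RAID group:   ", round(effective_iops(4, 4)))   # ~158

# Three RAID5 groups total roughly 3 x 158 = 474 host IOPS, vs. ~360
# shared by everything on the single RAID6 -- and a hot VM can only
# eat its own group's share.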

And before someone chimes in and says that RAID5 isn't safe anymore because a URE could cause total array failure: that doesn't really apply here, because we're talking about 1.5TB arrays, not 1.5TB disks in a 20TB array. At this size the odds of a URE finishing off an array are low enough that I wouldn't lose sleep over it, and if it does happen, you just restore from backups (if they don't have those, I would start there before worrying about latency issues).
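
For what it's worth, here's the exposure math, using the commonly quoted vendor spec of one unrecoverable read error per 1e14 bits for consumer SATA (an assumption -- enterprise drives are rated an order of magnitude better, and real-world rates are debated). The per-rebuild odds at 1.5TB aren't negligible on paper, but the underlying point holds: exposure scales with how much data a rebuild has to read.

Code:
URE_RATE = 1e-14       # probability of an unrecoverable read error per bit (vendor spec)
BITS_PER_TB = 8e12

def p_ure_during_rebuild(tb_read: float) -> float:
    """Probability of hitting at least one URE while reading tb_read TB."""
    return 1 - (1 - URE_RATE) ** (tb_read * BITS_PER_TB)

# Rebuilding a 4x 500GB RAID5 re-reads the 3 surviving disks (1.5TB):
print(f"1.5TB rebuild:  {p_ure_during_rebuild(1.5):.1%}")   # ~11%
# vs. re-reading ~18.5TB of survivors in a hypothetical 20TB array:
print(f"18.5TB rebuild: {p_ure_during_rebuild(18.5):.1%}")  # ~77%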

Now, it's been a while since I used WSS 2003 R2 -- does that include the UNIX file services (NFS)? If so, I would seriously consider setting up NFS and dumping iSCSI. That alone should give you better performance.
 
WSS 2003 R2 looks like it has NFS support, but I think it's only v2 or something. The problem with setting up different RAID sets and/or switching to NFS is that all changes to the storage have to be approved by the vendor, because it falls under their support contract. If I had my druthers, I'd wipe the damn thing, install OpenFiler, and use its software RAID and NFS. Keep them going on that until they can squeeze together enough money to buy a NetApp or at least some SAS DAS box. Anything to get rid of this POS...

At any rate, the vendor claims they have hundreds of these systems in the field running twice the load, and this is the only site that seems to have problems. For some reason the client trusts these ass-hats even though they've been getting fleeced by them for years. We'll see what happens. They'll probably send someone out on Monday to swap the RAID card during business hours and break everything. :D
 
What's running on this box? Any I/O-intensive systems?

Oh yeah, forgot to add what's on it: one server running DB2 databases, 2x SQL servers, a WSUS/file server, two internal web servers, and six other application servers. So, yeah, a shitty RAID card + SATA is not really a good choice here.

Thing is, it had been running what I would call "acceptably," in that no one complained about anything for the last year or so. We started replacing their Server 2000 based VMs with 2008 or 2008 R2 one at a time, and noticed this problem when we tried to clone a template and it brought the entire network to its knees. That was fun.
 
I have seen latency that high at my workplace -- it's actually on a NetApp 3020 stretch cluster with 6 shelves of disks per head.

Honestly, I would avoid NetApp -- we were getting such poor performance across the board that the most we could pull out of the SAN was ~3300 IOPS (and all of our disks are 'aligned').

We also hit the bug with iSCSI resets, which was a disaster.

We actually just finished testing a rather inexpensive Isilon NAS -- we got ~40TB of storage in the new cluster for what a head upgrade on the NetApp was going to cost, and it easily outperforms the NetApp. Also, the NFS license costs a ton on NetApp.

In fact, for a DR-site solution, I spec'd and built 2 identical OpenFiler boxes (~$5K each) that, over a single GbE port, performed better than the quad GbE ports on the NetApp.

IMO, NetApp is overpriced for what you get. Check out Isilon -- I'll answer any questions I can -- or just build a DRBD OpenFiler cluster.

I am in no way affiliated with Isilon or NetApp, except that I use them in my work server environment.
 
I would guess your issues with NetApp were configuration related. I support dozens of FAS2020/2040/2050 units in the field and have no complaints.
 
Well, if it was a configuration issue, NetApp engineers couldn't figure it out. We spent probably 60+ hours combined on support/conference calls -- we sent them logs, autosupport data, everything -- and they could not find an issue. In fact, just today a co-worker deleting 230+GB of data caused iSCSI resets across the environment.

So if you have a product and the company that makes and designs it can't support it, why are we paying thousands of dollars a year for support?

Not to mention that what you get isn't worth the price.

Sorry if this comes off as angry, but I've been called way too many times at 3 AM over some NetApp-related issue: VMs pausing and not responding to pings while they wait for disk access, losing time sync so NetIQ alerts that it can't get system uptime, etc., etc.
 
Fair enough. Everyone has horror stories about some vendor. My experience with NetApp has been different; however, I will agree that they're overpriced. But then again, so is EMC and every other storage vendor, when you consider they're all basically running Linux and software RAID on custom hardware. Though one vendor running XP Embedded did crack me up...
 
Update: Vendor has isolated the problem to read operations on the array. Still waiting to hear back on what they plan to do to fix it. This should be interesting...
 
The worst latency I have ever seen is when I shipped a floppy to my cousin on the other side of the country.

j/k. :p
 