Windows XP Professional x64 Edition supported up to 128GB, and when it was released some motherboards could be populated to that amount, so all was fine. Vista x64 kept the same limit even though motherboards have evolved, so although scRGB support was offered in Vista, I just waited to see what would happen with Windows 7. A dual-processor Nehalem EP motherboard can support 288GB, and some software like Panorama Factory can handle over 128GB with no issue, as it is not constrained by the "famous LLP64 ecosystem". WS2003 (same codebase as XP x64) supported up to 2TB, but I want a workstation OS with all the associated support and task optimization (drivers, application software compatibility, etc.). Please advise. Thanks!
- Changed type: Mark L. Ferguson (Moderator), Friday, February 20, 2009 1:58 AM
Vegan, I am with you...
We had WQUXGA monitors in 2001 (3840 x 2400) and Super Hi-Vision video in 2005 (7680 x 4320), not to mention next year's 28K from RED http://www.red.com/epic_scarlet/ and then deep color, scRGB, HDR, and the native RAW codecs of Windows 7, all making memory capacity a huge priority, especially for RAM disks, given the thread context-switch issues with conventional disks....
At least the heap was seriously boosted when XP x64 came out, but an OS that supports less than half of a fully populated dual-processor motherboard might soon draw more attention than octuplets!!!
That is all good, but why do you think W7 is the problem?
Remember that there are hardware RAM disks, so if you want one, buy a RAM HDD and do not use system RAM to make virtual drives.
The quality of the pictures you see on your display depends on the capabilities of your video card, not on system RAM.
Vegan Fanatic, who told you that your Linux server can see and use 18 EB? Why 18 EB? Why not 1000 YB?
It is all the same, because you cannot test it, right? And even if you could, who says it will work?
A Linux server may not be limited, but what will you say about Linux on the desktop?
The CPU can address an 18 EB address space, OK, but where is the memory controller that can support it?
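For what it's worth, the 18 EB figure is not magic; it is just the size of a full 64-bit address space. A quick sketch (plain arithmetic, nothing vendor-specific):

```python
# Where "18 EB" comes from: a 64-bit address space spans 2**64 bytes,
# about 18.4 * 10**18 bytes (18.4 decimal exabytes, or exactly 16 EiB).
full = 2 ** 64
print(full)                      # 18446744073709551616
print(round(full / 10 ** 18, 1)) # 18.4 -> the "18 EB" figure
print(full // 2 ** 60)           # 16 (binary exbibytes, EiB)
```

Whether any memory controller can back that range is, of course, the separate question being asked here.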
It is good to talk, but only when words match facts!
This is a technical forum, not a fantasy forum, right? :)
Did you notice that not every WS2003 edition supports 2 TB? Look at WS2003 Standard, for example.
Different memory limits at different SKUs are normal practice for market segmentation.
Unfortunately, a workstation with more than 128 GB of RAM is a very, very untypical user scenario, so I'm sure there will be no >128 GB support, not just in Win7 but in Win8 too. I'm sorry.
Your problem is different; it is a hardware problem, and you may check the Memory Remap option (or similar) in the BIOS to enable it.
It is better to create a new topic if this does not help.
Even if your SSD (what you call a RAM HDD) in a RAID configuration can give me 12GB/s, you still have latency, translation-layer issues, and, until MRAM, the limited write endurance. Also, just figure out how much 288GB would cost me with the Cenatek or Acard drives in a RAID array, in comparison to DDR3.
As far as the quality of pictures goes, I was referring to the source; you can certainly view the Presidential inauguration picture on gigapan.org with an 800 x 600 display, 65536 colors and a 1MB video card, but handling the original image or its workflow takes a bit more than the display; also, printing does not depend on my video card. Imagine a 9 x 6 stitch of mid-resolution pictures like gigapxl.org (4-year-old asymagon lens, etc...), with 16-bit color depth, and tell me how you would go about this workflow with 128GB.
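To put rough numbers on that workflow, here is a back-of-the-envelope sketch; the per-frame resolution is my own assumption for a gigapxl-class source, not a figure from this thread:

```python
# Assumed numbers: a 9 x 6 stitch of gigapxl-class frames
# (~4 gigapixels each, a guess), RGB at 16 bits per channel.
frames = 9 * 6
pixels_per_frame = 4_000_000_000   # assumed ~4 gigapixels per frame
bytes_per_pixel = 3 * 2            # RGB, 2 bytes per channel

flat = frames * pixels_per_frame * bytes_per_pixel
print(flat)            # 1296000000000 bytes
print(flat / 2 ** 40)  # ~1.18 TiB for one flattened copy, before
                       # layers, undo buffers, or scratch space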
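To put rough numbers on that workflow, here is a back-of-the-envelope sketch; the per-frame resolution is my own assumption for a gigapxl-class source, not a figure from this thread:

```python
# Assumed numbers: a 9 x 6 stitch of gigapxl-class frames
# (~4 gigapixels each, a guess), RGB at 16 bits per channel.
frames = 9 * 6
pixels_per_frame = 4_000_000_000   # assumed ~4 gigapixels per frame
bytes_per_pixel = 3 * 2            # RGB, 2 bytes per channel

flat = frames * pixels_per_frame * bytes_per_pixel
print(flat)            # 1296000000000 bytes
print(flat / 2 ** 40)  # ~1.18 TiB for one flattened copy, before
                       # layers, undo buffers, or scratch space
```

An editor typically holds several working copies at once, so the real working set would be a multiple of that single flattened image.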
The market segmentation is the whole issue; what's untypical to you might not be untypical to others. Like when someone said a while back that no one would ever need more than 640K; unfortunately, not everything was written in handcrafted machine code with double-checked assembler output and proper memory housekeeping after an application or process terminates, and the memory pollution and fragmentation aspects became more prevalent over the years. Don't reboot for 6 months and look at your memory usage. Also, workstation users might not be interested in a server OS; they need different task prioritization and a kernel and user ecosystem tailored to their needs. I also noticed that even when you have the same codebase, such as XP x64 and WS2003 x64, there are different drivers, and substitution does not always work. Vista might have been successful, but I am not sure that capping a workstation OS at 128GB will be.
You added one word (ever), and that changes the meaning of the old quote very much.
Well, I often do not reboot for a month or two (or more), until an update requires rebooting, and I have no problem with memory usage. Memory pollution... hmm, I do not like badly written programs. ;)
Memory fragmentation really is not an issue in most cases.
The 128 GB limit for workstation Windows will be increased, of course, but not in the next few years.
The terminology varies, but the concept is comparing a magnetic hard drive with perhaps 32MB of cache against RAM accessed via the IMC or via SATA (like the Acard or Gigabyte i-RAM). Even assuming the SATA physical-layer translation and protocol overhead is negligible, you'll probably get at best 1.2GB/s with 4 Acard 9010s in a RAID configuration. This is far from what the integrated memory controller in the latest crop of Intel or AMD processors can achieve.
The four 9010s will be about $1000, whereas I can use the motherboard sockets for faster access.
As for picture size, no one will dictate how many pictures I want to stitch together, even though I bet it's going to be fewer than maps.live.com.
I mentioned printing for the sake of resolution and rendering, even though I direct the spooler to a RAM disk like SuperSpeed...
What's your opinion of DMA abstraction performance in contrast with raw IOMMU above 4GB?
Things are not so simple, of course.
If you use a RAM HDD, all 128GB of RAM is free for processing data, if there is software that can use it.
With 128GB of free RAM you can stitch together as many pictures as you want :)
You can use an 8- or even 16-port RAID controller to build a RAID 0 array.
If you want to, you can. There is no limit to imagination.
I have to chime in on this one.
In reality, 32-bit Windows can actually see upwards of 64GB of RAM. Don't believe me? Load up Windows Server 2003/2008 Enterprise (32-bit). The reason for the artificial limitation on the desktop is the driver landscape. Mark Russinovich can explain it much better than I can. See: http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx.
EFI is there. You can easily create a drive to utilize GPT (GUID Partition Table). Nothing is stopping you.
What is RAID used for? Consider the real reason for using RAID, not the prevailing belief about how RAID should be implemented. How many people know the real ramifications, the benefits and the detriments, of using RAID 0?
Your OS SHOULD NOT BE INSTALLED ON A RAID 0 ARRAY!!! Never. The OS should, for redundancy purposes, be installed on a RAID 1 (mirrored) array. The data that the server or workstation uses should be placed on a RAID 5 array, following the installation of the OS (unless it is hardware based). When do you use RAID 0? When I/O performance trumps all other considerations, including system reliability and recovery. An example would be capturing real-time, uncompressed HD video (1280x720 at 60fps or 1920x1080 at 30fps) for editing and post production. Even then, the user needs to make provisions for backing up the data to a server or some other location in the event of a catastrophe.
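To make the trade-off concrete, here is a small sketch of usable capacity per RAID level for n identical disks of size s (simplified; it ignores controller, hot-spare, and rebuild details):

```python
# Usable capacity for the RAID levels discussed above.
def raid_usable(level, n, s):
    """Usable capacity (same units as s) for n identical disks of size s."""
    if level == 0:
        return n * s        # striping: full capacity, zero redundancy;
                            # any single disk failure loses the array
    if level == 1:
        return s            # mirroring: one disk's worth, survives a failure
    if level == 5:
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (n - 1) * s  # striping + distributed parity, survives one failure
    raise ValueError(f"unsupported level {level}")

print(raid_usable(0, 4, 1000))  # 4000 (GB) -- fast, fragile
print(raid_usable(1, 2, 1000))  # 1000 (GB) -- redundant OS volume
print(raid_usable(5, 4, 1000))  # 3000 (GB) -- redundant data volume
```

The numbers show why RAID 0 is tempting for raw throughput and why it is a poor home for an OS: it is the only level here with no tolerance for a disk failure.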
As for your monitor issue, you need to investigate that. At work I have a Dell Precision 690 with 16GB of RAM and an NVIDIA Quadro FX4600 card that is pushing two 30" Dell monitors at their native resolution of 2560x1600 each (a total desktop resolution of 5120x1600). I have no problems or issues at all running Windows 7 x64. On one display I can have Adobe Photoshop CS3 up and running along with Autodesk 3DS Max 2009 SP1, while the other display is running Adobe Premiere CS3 capturing real-time video from an HDMI input capture card.
If all of that is too limiting, well you always have http://distrowatch.com/.
Well, my understanding is that Win 7 and Windows Server '08 share the same codebase, and the max RAM supported by Server '08 is 2TB. That said, while recent AMD chips can address 256TB, Intel chips are physically limited to addressing only 1TB, and early EM64T chips only 64GB.
That said, '03 Server Standard was limited to 16GB, '03 Server R1 to 32GB, Enterprise Edition to 64GB, and R2 to 1TB.
In '99 you could get DEC Alpha workstations with 64GB of RDRAM.
All of the AMD CPUs have a 1TB (40-bit) physical address range and a 256TB (48-bit) virtual address range. If you have other technical information, please post it here.
But there is more: none of the AMD CPUs can 'drive' more than 4 memory slots, which means a maximum of 16GB (4 x 4GB DDR2) for AM2+ socket CPUs and 64GB (4 x 16GB DDR3) for AM3 and F socket processors.
None of the Alpha chips could address more than 8GB; DEC's Alpha Station ES40 model can hold 4 sockets, with each socket driving 8GB maximum (8 x 1GB), so the maximum for the largest Alpha station (memory-wise) was 32GB. This box wasn't yet available in 1999, though, and we had to wait until 2003 before the extremely expensive 1GB sticks became available.
Why is section 2.6.7 (page 68) stating a 48-bit (Barcelona & Shanghai) physical address range [256TB]?
Also, AFAIK the Nehalem EP & EX will offer a 44-bit [16TB] physical range.
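All the capacities being quoted in this thread follow directly from raising 2 to the address width; a quick sanity check of the figures:

```python
# Address width -> addressable capacity, for the widths mentioned here.
for label, bits in [("PAE (32-bit Windows)", 36),
                    ("AMD local DRAM",       40),
                    ("Nehalem EP/EX phys.",  44),
                    ("AMD coherent range",   48)]:
    print(f"{label}: 2**{bits} bytes = {2 ** bits // 2 ** 30} GB")
# -> 64 GB, 1024 GB (1 TB), 16384 GB (16 TB), 262144 GB (256 TB)
```

So the 64GB PAE ceiling, the 1TB local-DRAM figure, the 16TB Nehalem figure, and the 256TB figure in section 2.6.7 are all the same formula with different bit widths.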
Understood that the IMC "load" capacity might require efficient and dense memory to take advantage of the design, but as Vegan implies, it's possible that the current recession will induce manufacturers to outdo themselves, precipitate MRAM or more recent designs, and spend more time in the labs than in court, to everyone's benefit; so I would not preclude the possibility that Win 7-era processors could be fully populated before EOL.
Concerning the PAE and the 128GB paging of a 32-bit OS, I am not sure we'll miss it, yet I know x64 is largely an x86 extension rather than a native 64-bit design. The same applies to heap and other handle constraints in 32-bit OS.
Then, concerning the workstation vs. server OS: I'm not sure what the difference is between a WS2008 x64 driver and a Vista or W7 x64 driver for the same device, but maybe it has to do with the paged pool and the 128GB boundary...
This (section 2.6.7) is somewhat misleading: in some documents they mean physical DRAM address ranges, while in others they refer to the system's physical address ranges. The 48-bit "coherent" physical address range is used to map DRAM and MMIO. Only 40 bits (max.) are used for local DRAM access (used by the processor's DRAM Controller (DCT)); the remaining 8 bits are used by the Memory Controller (MCT) to route memory requests. One possible route is another processor's DCT in an MP system. That means a single processor can theoretically address up to 1TB (40-bit) of local DRAM, and up to 4TB of system memory in a coherent-fabric system (4 processors). Note that the virtual address space of an x64-based OS is 8TB, quite a bit larger than 1TB, but you know where you are heading when allocating more than is physically available in the system.
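A minimal sketch of the split described above; the real routing is done via DRAM base/limit registers in the northbridge, so treat the fixed bit layout here as illustrative only:

```python
# Illustrative split of a 48-bit coherent physical address into the
# top 8 bits (used by the MCT to route the request, e.g. to another
# processor's DCT) and the low 40 bits (local DRAM offset).
LOCAL_BITS = 40

def split_coherent_address(addr48):
    node_route = addr48 >> LOCAL_BITS                 # top 8 bits
    local_offset = addr48 & ((1 << LOCAL_BITS) - 1)   # low 40 bits
    return node_route, local_offset

node, off = split_coherent_address((3 << 40) | 0x1234)
print(node, hex(off))  # routed toward node 3, local offset 0x1234
```

This is why the 48-bit figure in section 2.6.7 and the 40-bit (1TB) local-DRAM figure are not in conflict: they describe different layers of the same address.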
About Nehalem EX/EP: it looks like the address range is 44/48 (physical/virtual), but again it's not clear (to me at least) how these will map to the local (integrated) DRAM controller. My guess is 40 bits, or max. 1TB for DRAM, which is a fair number for a single-processor (8-core) system. Sure, the current recession will have a serious impact on future developments in this area; who says we will see any Nehalem EX chips and systems hitting the streets this year?
The drivers for the same device on W7 and Server 2008 R2 are (or can be) exactly the same, provided we are talking about x64.
Igor, I worry for you.... Untypical today can be standard in no time. Developers and manufacturers are watching users and economics. The initial cost of the latest technology stops most people from using it, but production costs drop and user expectations rise very quickly. I doubt that there is an upper limit to how much memory and processing power we would use if it were available. Surely, rather than assuming we have a normal bell curve where only a few require high-end machines, we are facing an exponential graph where eventually everybody will NEED more processing power than anyone can imagine. Much like the phone and the motor car: once a luxury, now a necessity, in future a question of survival. Surely any smart programmer or manufacturer is leaving a bit of room for growth.
That is, the cute little plankton of today is the blue whale of the future; don't try to grow it in a fish tank.