Keeping a computer from waking immediately after it sleeps!

Bought a new computer a couple of months ago and it has nagged me ever since: about half the time, after the machine goes to sleep, it immediately wakes up.  Windows 10.  I’ve “solved” this so far by holding the power button down until power is removed, which makes the machine quiet and without lights, but a better answer is to figure out why this thing is waking up immediately after sleeping.  The diagnosis and fix are detailed below.

Machine is an IBuyPower gaming computer, Windows 10 64-bit.  It comes with a very good Nvidia graphics card and big quiet fans, so, winner.  Generally a good box, but the machine has a bad habit of waking up immediately after it goes to sleep, and that’s a headache.

Machine wakes up immediately after sleeping 

Click on Start / Power / Sleep and the computer immediately wakes.  This is normally followed by me pressing the power button for 5 seconds to tell the power supply to cut power to the machine, which generally makes the computer quiet, but this is not the right solution or behavior.

Took some time today to debug it and the problem is now solved.

Step 0 – Verify the machine is configured to sleep 

Start / Settings / Power & Sleep / Additional power settings (over on the right side)  / Choose what the power buttons do

And survey says that the power buttons are configured in Windows to tell the computer to sleep, okay, good.

Power Settings – Sleep when press power button

Step 1 – Figure out why the computer woke up 

Start / cmd (run as administrator).  It runs; from the elevated prompt, run eventvwr.msc

This launches the Microsoft Management Console, Event Viewer.  Go to the System log and scroll back in time to find out when and why the computer most recently woke from sleep.  The answer in this case is that the machine went to sleep at 23:57 UTC and woke up 6 minutes and 3 seconds later.  Odd, I thought it was 3 seconds total.  No matter, the time of this is just a curiosity; the real problem is that Event Viewer says that it DOES NOT KNOW why the machine woke up.

Event Viewer wake source not helpful


 Step 2 – Never trust a GUI 

Command line tools can ask the system the same question and here they give a more helpful answer:

C:\Windows\system32>powercfg -lastwake
Wake History Count - 1
Wake History [0]
  Wake Source Count - 1
  Wake Source [0]
    Type: Device
    Instance Path: PCI\VEN_10EC&DEV_8168&SUBSYS_E0001458&REV_16\01000000684CE00000
    Friendly Name: Realtek Gaming GbE Family Controller
    Description: Realtek Gaming GbE Family Controller
    Manufacturer: Realtek

OKAY – we have a hint.  It took me a while to figure out that “Gaming” in this case was just there to confuse matters; the real key is the “GbE”, which means Gigabit Ethernet.  Off to Device Manager to find this device and see if it is configured as a “wake source” for power management.
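Before leaving the command prompt, powercfg can also list every device that is currently armed to wake the system, which makes a quick cross-check of the suspect (the exact list varies by machine):

C:\Windows\system32>powercfg /devicequery wake_armed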

 Step 3 – Off to device manager to fix it 

Start / devmgmt.msc (run as administrator).

Locate network adapters in Device Manager. 

Device Manager – Network adapters

Find the Realtek Gaming GbE Family Controller and select its properties.

Clear checkbox – do not allow this device to wake computer

And success!  Clear the checkbox for “Allow this device to wake the computer” on the adapter’s Power Management tab and the problem is solved.  If I were an enterprise, I might have some use for Wake-on-LAN or similar, but here I do not want the network adapter to be able to wake my machine, and certainly not in this case, as it appears the Realtek Gigabit Ethernet adapter is waking the computer for just about no reason at all.  I hope this helps other people experiencing the same.
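For the command line inclined, powercfg can make the same change from an elevated prompt; it takes the device’s friendly name as reported by -lastwake (shown here with the name from this machine):

C:\Windows\system32>powercfg /devicedisablewake "Realtek Gaming GbE Family Controller"

The matching /deviceenablewake switch reverses it.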

Joe Nord

Originally posted April 16, 2021

Stop Windows from relaunching applications

When Windows 10 starts and a user logs on, the operating system by default automatically restarts all the applications that were running when the system was rebooted, including launching them to reload documents and revisit web browser URLs.  Handy in concept, but this is not how I want the machine to behave.  I would like a clean desktop that looks the same every time the machine boots.  Thankfully, there’s a setting for this; not well advertised, but it does exist.

Search the internet as I might, I couldn’t find the answer, but after posting the question on Twitter, it was quickly answered by @JenMsft on the Windows shell team.

Answer: Settings, Accounts, Sign-in options, scroll down, “Use my sign-in info to automatically finish setting up my device and reopen my apps after an update or restart”.  By default, this is enabled; click the button to turn it off and everything now works … “as it should”.

Once I knew what to search for, I was able to find the online documentation.  The option exists only on machines that are not part of a domain.  Appears I’m more of an enterprise user even on a home computer.

Warren Simondson @Caditc noted that changing this setting enables or disables the following item in the registry.

Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\UserARSO\<usersid>\OptOut

DWORD setting (boolean).  1 means “opt out” => do not restart applications at machine start.
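For scripting the same change, here is a minimal PowerShell sketch, assuming an elevated prompt and that the key behaves as Warren describes (it writes the OptOut value for the current user’s SID):

# Opt the current user out of application restart at logon (ARSO)
$sid = [System.Security.Principal.WindowsIdentity]::GetCurrent().User.Value
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\UserARSO\$sid"
New-Item -Path $key -Force | Out-Null                        # create the key if missing
Set-ItemProperty -Path $key -Name OptOut -Value 1 -Type DWord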

As a once-upon-a-time programmer of user profile code, this has just become neat!  Why in the holy h@ll is WinLogon using an HKLM-based setting to control a per-user behavior when the user registry must already be loaded before launching the applications?  I note that this makes it really difficult to place this opt-out into a default user profile to make it standard policy for the machine.  If anyone has good ideas as to why the machine registry was used for this per-user setting, do please add them in the comments below.

Final thought: the above does not give you a machine-wide default of opting out.  There “could be” a separate setting to make that global and if I had to guess, it would be in the same WinLogon space with no SIDs.  I have not had a chance to check, but it’s a possibility.

Joe Nord

Originally posted July 22, 2018

Explorer shows MP4 internal date rather than date of file on disk

Among the jobs of an operating system is to maintain the disk file system, including the directory structure, file names and, for each file, the file size and the pertinent date information for when the file was created, modified and potentially accessed.  I have been editing some videos lately and after producing an MP4 using ffmpeg, I used PowerShell to set the file create and modified times to the date that the video was recorded.  This makes it easier to sort and allows viewing software to display timelines, good things.  MP4 files also have an internal date that is set by cameras to note the datetime of when the video was recorded.  I was surprised to learn that Windows Explorer displays the MP4 date rather than the file date and this can be problematic if the MP4 file does not contain valid date data.  This post describes the issue in detail and provides steps for adjusting all the dates to the same datetime.

Notice the use of the term “datetime” rather than “date” and “time”.  Windows (and most modern systems) stores file dates and times as one field, which on Windows NTFS is a 64-bit signed count of 100-nanosecond intervals since January 1, 1601.  The point is that it can split a second into ten million pieces, and 64 bits is a big enough space that it can accurately store both date and time for a very long time.  As users we never see this, and as programmers we usually don’t need to worry about the detail of the field; it’s a 64-bit number that represents the file’s date and time in one go, “datetime”.  Since it’s one field, comparisons of before, after and same become very easy and can be done in one operation, and since it’s signed, you can “subtract” datetimes to figure out how long it was between two times.
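To see both ideas in one place, here is a small PowerShell sketch (the file path is just an example): the raw 64-bit tick count behind a file’s datetime, and a subtraction of two datetimes done in a single operation.

# Show a file datetime as the raw 64-bit value and subtract two datetimes
$f = Get-Item "C:\Windows\Media\tada.wav"
$f.CreationTimeUtc.ToFileTimeUtc()          # count of 100ns ticks since January 1, 1601
$f.LastWriteTimeUtc - $f.CreationTimeUtc    # one subtraction yields a TimeSpan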

Windows stores 3 datetime fields for every file on disk

  1. Created
  2. Modified
  3. Accessed

MP4 stores its own datetime of when the file was created – normally stored by a video camera assuming you remembered to set the clock!

On Windows NT systems, the Create date is not the date that the file was first created, it is the date that this specific file was created, so if you copy the file from one location to a new location, that new file will have a fresh Create date equal to “now”, and if you then edit the image/video with a program, that will update the “modified” date and all concept of when the content was created will be lost.  Image and video cameras work to solve this by storing the date that the picture was taken or the video was recorded directly inside the image/video file.

As a side note, the technique in PowerShell to adjust the file create date back to what it is supposed to be is:

$fdate = "1941-12-07 07:00"    
$dname = Get-ChildItem ("Pearl*.jpg")
foreach ($name in $dname)
{
    Write-Output $name.FullName, $fdate;
    $name.CreationTime = $fdate
    $name.LastWriteTime = $fdate
}

In my recent case, the files were MP4 videos of the kids in their younger days; I had imported these from an analog camera to the computer and, after processing through a few different tools, the last step was using ffmpeg to convert to MP4.  This worked great!  Then I set the CreationTime to the historic datetime of the video being recorded and … thought I was done.

View with Windows Explorer and … the DATE DID NOT TAKE!

Explorer is showing that the video’s datetime is 12/31/1969 7:00pm.  That isn’t right!  

Compare the GUI view to command prompt view.  Never trust a GUI!

Command prompt has the “correct” datetime.

Ask the Explorer GUI to show file details, and we have a match!  It is listed as the “Media created” date.

And use EXIFTOOL to show the innards of the data and we have a winner!

The dates inside the MP4 file are all zeros!  Actually, this makes sense.  I captured the video content from one tool, plumbed it through a few others, and out the other side, ffmpeg produced the MP4 file.  By the time it got there, the only record of the datetime that the video was recorded was in the name of the file.  The MP4 file has zeros as its internal record of when the file was created.
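For reference, exiftool can both list and write the embedded date tags; writing them directly is an alternative to the VB code discussed below (commands sketched from memory, the file name and date are just examples; check exiftool’s documentation before trusting the tag names):

exiftool -time:all -a video.mp4
exiftool "-QuickTime:CreateDate=2005:10:23 20:06:34" video.mp4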

Sometimes things try to do too much

The job of an operating system is to maintain the file structure on disk.  Esoteric concepts like what is inside the files are, IMHO, an application responsibility.  BUT – Windows Explorer is trying to help out and is displaying the MP4 internal date rather than the Created, Modified, Accessed times from the file on disk.  Explorer is trying a bit to be a video display application, something that really isn’t its job.  Then again, it can also show thumbnails for images and … people like that.  Is it something that SHOULD be in an operating system?  The lines blur and we can all have a nice debate on whose job it is to maintain the file system and whose job it is to display the image/video content.

Back to reality – this is not showing the date that I want it to show and the solution is to either (a) configure Explorer to show a different date field or (b) modify the MP4 files to have valid datetime information embedded.  “B” is the better solution.  I went looking on the internets and found this fine post on StackOverflow, link.  Bingo!  I am not the first person to have this problem.  In the post, Edward Brey even provides Visual Basic source code to modify the MP4 datetimes to be any date you want.  Awesome.  Took a bit to compile that up; I ran it and everything looked good, but I wasn’t done.

In running the VB code to adjust the MP4 datetime, the file modified date was also modified, so I ran my PowerShell script one more time to fix the file datetimes and … by magic, the MP4 datetime was now incorrect.  For some files, off by 5 hours; for others, off by 4.  A bit of study concluded that the 4 vs. 5 came down to whether daylight saving time was in effect on the date of each file.  When PowerShell (.NET) adjusts the CreationTime, it is also adjusting the MP4 internal create date!  What?  Why?

  1. I did not ask PowerShell to adjust the datetimes that are in the file contents, only the file “date”.  But, the embedded date in the MP4 file did change!
  2. And we’re back to the job of operating systems vs. the job of applications, but this could be a very long diversion

And it’s worse.  The MP4 datetime was “correct” per me before; after setting the file create date, the MP4 datetime was incorrect by a period of time equal to the number of time zones away from GMT on the date that the file was recorded.

I studied this for about a day and concluded that MP4 internal datetimes are the datetime that the file was created/recorded/modified and that time shall always be ZULU!  The VB code didn’t know whether the time I gave it was local time or GMT/UTC/Zulu, so it went with what it had.  When the file CreationTime was set from PowerShell, the .NET runtime decided to help out and adjust the MP4 datetime to match the create date on the file itself.  Okay, problem now understood and dates all adjusted.  What is “odd” is that if PowerShell and .NET adjust the MP4 internal date when it is concluded “wrong”, why don’t they also do that when the internal datetime is zeros?  Consistency here would be a win.

Please add your comments below.  If you want the compiled version of the program to adjust the MP4 internal dates, drop a line and I’ll send it your way.  Email joe at this domain.

Joe Nord

Originally posted Dec 22, 2018

DOS New Executable .EXE file format

The DOS world’s need for memory grew and the 64KB available to .COM executables was no longer adequate.  The NE “new executable” file format was invented and uses the .EXE file extension rather than .COM.  The first 2 bytes of these files include a tag to identify the format, and if you guessed that this would be “NE” to denote “new executable”, you’d be incorrect.  The first 2 bytes of every .EXE file are “MZ”, famously because the name of the programmer at Microsoft who wrote the code was Mark Zbikowski.
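You can confirm the signature on any .EXE with a couple of lines of PowerShell (notepad.exe is just an example file):

# Read the first two bytes of an EXE and show the signature
$bytes = [System.IO.File]::ReadAllBytes("C:\Windows\System32\notepad.exe")
-join [char[]]$bytes[0..1]      # prints MZ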

The EXE file format allowed multiple segments to be defined and included the ability for separate compilation of portions of the program, and SDKs.  That is, different parts of the program could be compiled into .OBJ files and then a LINK step performed to assemble the resulting EXE file.  This enabled many good things, like separate compilation of varied portions of programs and the ability to purchase libraries of code from other developers without them having to provide source code.  The NE format also permitted programs to be “large”, occupying up to all the memory available on DOS computers.

I intended to write a detailed description of this evolution of file formats here, but there’s no need, it’s been well done in detail by others and I provide links here

Cutting to the meat of it, the NE (MZ) format executable has these portions

  1. Header
  2. Relocation list
  3. Code

Which is really

  1. Header
  2. Relocation list
  3. <Code>
  4. [Code]
  5. […]

The Header includes information for allocating a heap and a stack.  One grows up, one grows down; when they collide, the application is out of memory.  Notice that this is still DOS, so it isn’t like the operating system is going to do anything when the application collides its memory.  Still, the executable format is starting to grow into a real concept of an operating system, with a loader.

The Code and Relocation list deserve a bit more description, as there can be multiple code regions, each limited to 64KB (the size of a SEGMENT).

The executable is defined in segments, each of which is loaded into memory at a paragraph boundary (a 16 byte boundary).  The SEGMENT of that paragraph of memory can be addressed using the segment registers; 16:16 segment:offset addressing converts to a physical address by shifting the segment left 4 bits and adding the offset.  At this time in the life of Intel processors, there was no such thing as virtual addresses.  The 8086 CPU is a pretty straightforward machine.  Segment:offset converts straight to physical and when the CPU addressed it, the access actually went all the way to the ISA bus where memory would respond.
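As a worked example of that arithmetic, here is a two-line PowerShell sketch using the BIOS reboot address that appears in the .COM example later in this collection:

# 16:16 segment:offset to physical: physical = (segment << 4) + offset
$segment = 0xF000; $offset = 0xE05B
"{0:X5}" -f (($segment -shl 4) + $offset)   # prints FE05B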

After loading each code segment into RAM, the DOS loader applies the fixup records so that code calling between segments can call the 16:16 addresses where the program segment is actually loaded at runtime.  There is NO provision for DLL’s or dynamic linking.

This file format was the primary format for DOS computers through the long life of the DOS operating system and it is still with us today.  The modern PE file format includes an NE/MZ format executable as a “DOS stub” at the front.  This is primarily so that programs intended for Windows 3.11 or OS/2 could display a message along the lines of “this program is intended to execute under Windows” and then terminate.  Creative programmers can use the DOS stub to run a DOS version of a program when on DOS and a Windows or OS/2 version when on those operating systems.  We’re on a journey here; PC operating systems are starting to look like “real computers”.  The next post will take us into modern times of about 1990.

DOS .COM file review

In the beginning, the DOS .COM file format was the format for executables of size less than 64KB and let’s face it, who would really need more?  I’m headed down a path to discuss PE format executables and no good discussion is complete without a foundation.  In the beginning, there was DOS; life was simple and there was nothing between you and the computer.  This post describes the early executable .COM file format showing code, data, everything that you need for small executables!

The .COM file format was simple.  The whole program had to be less than 64KB in size.  Technically less than 64KB minus 0x100 bytes, but stick with me.  To run a program, DOS allocated memory for the size of the file, loaded the entire file contents into memory and … branched to it via a “call” instruction.  Simple stuff; absolutely no runtime fixups, no DLL records, no DLLs!, heck, no linked operating system APIs that you could call!  Need something in your program?  You should make it part of your 64KB, because that’s all there is!

Going from memory today, because a DOS computer is not readily available, the offset (IP) at the start of execution was 100 hex.  Notice I didn’t say “into the starting segment”; there was only ONE segment.  Code, data, everything in one place with absolutely no distinction between anything.  The verbatim bytes from the .COM file are placed into memory at CS:100 and DOS calls it.  Did the program work or not?  DOS didn’t care, it loaded it into memory and … “called” it.  Everything else was bonus time.  If the machine managed to not become a colored checkerboard mess during the life of the program, the program could eventually “return” and execution would go back to DOS, which would put you back at the COMMAND.COM prompt.

Sample program

Though I have no DOS computer readily available, I do have old programs.  The smallest and easiest for show and tell is one I wrote with some friends in 1988 to efficiently reboot a computer, skipping the memory test during boot.  The program is … 16 bytes and performs the same activity as Ctrl-Alt-Delete, including setting a flag to tell the BIOS to skip the memory test on boot.  Interestingly, we wrote this using debug.com and the “a” (assemble) command, so there is no source and … there never was any source.  To show what it does, we need a disassembler that understands 16-bit Intel little endian machine code; DEBUG.COM from DOS would do it, but again, no DOS handy.  Found a suitable disassembler online at shell-storm, here, but first we need hex.  Using my own HexDump utility, we get these hex bytes:

*
* input file: boot.com
*
* OFFSET  +0       +4         +8       +C
 00000000 B840008E D8C70672 - 003412EA 5BE000F0
*
* 16 bytes converted

Feed that into the disassembler above and get the below.  I added the comments.

0x0000: mov ax, 0x40                ; Establish vision to the BIOS data area
0x0003: mov ds, ax                  ; Data segment is 40 equals physical address 400
0x0005: mov word ptr [0x72], 0x1234 ; set 40:72 to 0x1234 (BIOS Flag for fast reboot)
0x000b: ljmp 0xf000:0xe05b          ; Branch to BIOS "reboot" code

When run, these 4 instructions are “it”.  DOS loads the code into memory at offset 100h and branches (calls) to the first byte.  That’s it.

These were simple times and it was downright impressive what could be accomplished in 64KB, or in this case, in 16 bytes.  This program reboots the machine, so there is no need for the code above to include return logic, but it could be done with a “ret” instruction or you could issue the DOS system call to terminate with a return code.

Calling runtime services

These early systems didn’t have library linkage or DLL fixups – there were no .LIBs or .DLLs and “that’s the way we liked it”.  If you wanted to call the operating system to do something useful like display a string or read a file, you loaded up some registers and kicked off an “int 21h” to call DOS.  YES – software issuing the equivalent of hardware interrupts to call the operating system!  It’s DOS, do what you want!  The DOS programming references told you what to put in the registers before the call; issue the DOS interrupt and things happen.  Get the parameters wrong and the machine could be hosed – but hey!  It was your machine and there was very little that couldn’t be cured with a Big Red Switch or a Ctrl-Alt-Delete, or as this program demonstrates, having a batch file call a program named “boot.com” to accomplish the same end.
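For a flavor of how that looked, here is a minimal sketch of a .COM program that prints a string through DOS function 09h.  Written from memory and untested since no DOS box is handy; the offsets assume the 0x100 load address:

0x0100: mov dx, 0x0109    ; DS:DX -> '$' terminated string (DS already equals CS in a .COM)
0x0103: mov ah, 0x09      ; DOS function 09h: write string to console
0x0105: int 0x21          ; call DOS
0x0107: int 0x20          ; terminate program, return to COMMAND.COM
0x0109: db "Hi$"          ; the string data lives right after the code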

Compatibility with CP/M

With a joke at the start of this post that nobody would ever need more than 64KB, the real story of .COM in DOS is that it provided application backward compatibility with the 16-bit (64KB is all you get) CP/M operating system.  DOS was written to the abilities of the 8088/8086 Intel CPUs and these had a segment:offset based addressing limit of 20 bits, equal to 1MB.  The segment was shifted left 4 and the offset added to get the physical address.  The CP/M executable format was 64KB of flat memory with no segments.  If you were designing DOS for an 8086/8088 and wanted backward compatibility with the existing apps of the time, this made it possible.  Load an application into memory, pre-set the CS, DS, ES and SS segment registers to point to the loaded SEGMENT (paragraph boundary memory) and “call” the loaded program; it would run, the whole time blissfully unaware that the machine supported segment:offset addressing.  Next, we move forward to the New Executables!

Joe Nord

Originally posted Feb 08, 2018

Outlook – Allow website to configure

Had an interesting one today when Outlook 2016 prompted me to permit a website to configure email settings for a user that isn’t me.  In this case, it was for a user that doesn’t work at my company anymore and this led me to ponder whether someone in IT was trying to connect me to the no-longer-here user’s email inbox.  Answer: no.

There are many descriptions of this dialog on the web but they all seem to end with telling Outlook to never do autodiscover; I didn’t want that, I wanted to solve the issue and leave the configuration items untouched.  I tried all the easy things to get this dialog to go away and it continued to be shown on each fresh launch of Microsoft Outlook, but the mystery is now solved.

The username@company.com email address presented by the dialog is 1) valid, 2) not me, and 3) for a user that no longer works at my company.  The email inbox link-up didn’t make sense, so I did some more digging and concluded that it was calendars.  Ctrl-2 to see calendars, view “shared calendars” and sure enough I have an unchecked item for this user.  I am not viewing his calendar, but I did once upon a time and Outlook seems to remember that.

Clear that user from the list of shared calendars, close outlook, reopen and … presto, problem solved.

Hopefully this helps someone else.  Enjoy.

Joe Nord

Originally posted Feb 07, 2018

2 comments

Comment from: Ian Visitor

Thanks, it seems to have done the trick. I had a work’s calendar in shared. I was reluctant to press okay as I’ve made the mistake of accepting similar before. I use my own laptop to read work’s emails and when pressing something like this in the past, meant the work’s IT took over the settings for my laptop.
So of course I was reluctant to do the same.

07/23/18 @ 02:24 pm

Comment from: Ian Visitor

spoke too soon…back again…sorry about that. 07/24/18 @ 10:19 am

Net Neutrality FCC vs. FTC

Net Neutrality is an emotional subject.  It sounds like it is about equality and liberty for all, but that’s a simplification.  As a foundation thought, this is not about “equality”, it is about which federal agency regulates the Internet, the FCC or the FTC?  Read this post and I hope it will bring some clarity to the discussion, and help the folks on both sides of this polarizing topic realize that they have more in common than they thought – keep the internet free!


The view of Net Neutrality proponents is that internet service providers (e.g. Comcast, AT&T) are in the business of delivering internet bits to / from users, and any filtering of these bits, or preference of some bits to other bits goes against the god given foundation of the US of A. This is ‘Merica!. Don’t mess with my internet!  Why then would anyone be interested in repealing Net Neutrality?

Some background on pricing

As home users, we pay a monthly fee and stream all we want. The ISPs average usage out over time and make sure they have the ability to deliver the bandwidth that their customers demand, and you get charged to make all this happen. Upstream of your immediate ISP, its more complicated.

Upstream of Comcast/AT&T, the Internet is a world of interconnected networks, big networks, with peering agreements. The Internet is not a network, it is a network of networks. We take it for granted that a user on AT&T U-verse can communicate with a user on Comcast, but this would not happen if those companies – and the other companies between them – did not have peering agreements to move bits to/from their major networks. (I just checked, there are 13 networks between my house and www comcast net).  Technically a count of routers, and there could be more than one inside each ISP along the way, but the point is that there are multiple layers of networking providers between any of us as home users and big content providers.  They work to keep this path short, but the layers must exist to have the internet scale to the levels of users that supports. 

All of this network-of-networks traffic ultimately funnels to a set of backbone networks which tie all the networks together – a backbone that traces its roots to the US National Science Foundation’s NSFNET – and the data throughput is staggering.

In these big networks, network inter-connection is “free” so long as the number of bits in, is roughly equal to the number of bits out. This was one of the foundation principles defining the network of inter-connected networks that we call the internet. Everyone would like to connect to it because there is mutual benefit for all parties. This works so long as producers and consumers of bits are roughly random – it balances out. By contrast, consider what happens when the numbers are not 50/50. Here, if you SEND more than you receive, you pay the downstream party for bits you send that are more than bits you receive. 

Enter the modern world of video delivery.  There are multiple downstream networks between Netflix and its viewers, yet the cost of carrying those bits gets REPEATED at every downstream network and Netflix only pays the first.  The issue is that some companies have very successfully figured out that they can get into the video delivery business over the internet and OTHER companies, downstream of them, will be forced to PAY to deliver their content.  This is nothing short of a genius move!  Netflix gets other companies to pay to deliver the content and the consumers pay Netflix.  SMART!  The ISPs in the middle, though, are not happy with this arrangement and so we get calls to repeal net neutrality!  Bottom line, they want to be paid.  They want to be paid by Netflix, but if they can’t get that, they’ll eat it out of the consumers with nickel, dime and dollar additions to bills.

Caching comes into play.  The Netflix video library seems infinite, but it’s a finite amount of data.  ISPs CAN cache the whole thing and if they do, the ISP no longer has to pay the upstream internet providers for sending the hugely redundant bits.  Netflix has caching servers they can install at ISPs to make this easier.  ISPs almost surely also expect to be paid for hosting the caching servers in their data centers.  Netflix likely expects them to do it for near free, as a caching service that saves the ISP fees it would otherwise pay the upstream network providers.

Then this multiplies by the number of companies in the video delivery business.  ISPs end up with 100s of caching servers scattered around the country for EVERY video delivery company; multiplied by 100s of video companies, that equals real costs.  The video delivery companies that are not cached are now pissed because they are being discriminated against.  You cache Netflix, but not me!  Netflix comes in high fidelity and my service is throttled!  We want equality!  It’s illegal!  It’s anti-competitive!  Notice I’m not talking about outright discrimination here; this is just the way it plays out when attempts are made to make things efficient for the big players.

FCC or FTC
For the folks wanting to repeal Net Neutrality, the grand question of the present debate is not one of equality for everyone; the question is one of fair business practices and anti-trust.  Are the actions by the ISP monopolistic?  Are contracts with upstream providers “fair”?  The Net Neutrality proponents want a non-filtered internet.  Notice that the two groups do not really want different things; though everyone appears to not get along, many are motivated by the same goal: keep the internet free!

With Net Neutrality now repealed, the internet does not revert to a place for ISPs to perform evil.  It goes back to the world of two years ago, where the FTC was in charge instead of the FCC.  Is the FTC more hands off?  Possibly – but anti-competitive rules apply even more strongly at the FTC, so the fears of doom with the repeal do not seem well founded.

Keeping the internet free
For me, the present course is good – but not for any of the above reasons.  I am more interested in the 1994 CALEA law and whether it continues to exempt computers and networks from the CALEA requirements for real-time and remote wiretap.  Putting the internet in the dominion of the FCC makes it seem more like a “phone”.  If the internet is regulated by the FTC rather than the FCC, this puts the internet further away from common carrier and, IMO, is GREAT for civil liberties.  We’ll have to wait and see if the courts and laws agree.

I wrote the foundation of this on facebook a few days ago and it received a good deal of interest.  Moved it here to make it a bit easier to read. Let me know your thoughts.

Joe Nord

Originally posted Dec 20, 2017

Comment from: joe Member

9th Circuit Court of Appeals today allows the FTC case against AT&T for “unfair or deceptive acts or practices” regarding throttling of the “unlimited” data plan to proceed.
http://cdn.ca9.uscourts.gov/datastore/opinions/2018/02/26/15-16585.pdf

02/26/18 @ 07:13 pm

Audio WAV file format

The RIFF (.wav) file format has been around unchanged since the early 1990s and is still in common use today.  It goes back to a time when the CD audio format was king, and RIFF .wav follows the Red Book convention pretty closely, with the change that the number of channels, bits per sample and samples per second can vary as described in the file header.

As an example file, I have selected TADA.WAV from Windows 10, \windows\media\tada.wav.  This file is 285,228 bytes, the important part is in the first 44 bytes shown here.

* OFFSET +0       +4         +8       +C
00000000 52494646 245A0400 - 57415645 666D7420 *RIFF$Z..WAVEfmt *
00000010 10000000 01000200 - 44AC0000 10B10200 *........D¬...±..*
00000020 04001000 64617461 - 005A0400 00000000 *....data.Z......*

The first thing to notice is that .wav files, and indeed all RIFF files, always have “RIFF” as the first 4 characters.  The RIFF header is immediately followed by a WAVE header.  RIFF is “Resource Interchange File Format” and in the formatting of file data, RIFF refers to everything as “chunks”.  A chunk is a collection of data that starts with a 4 character code identifier and is followed by a 32-bit length, which is the amount of data in the chunk not including the chunk header.  These and all numbers in RIFF are stored little endian.  Chunk sizes are usually even, but where one is odd, the parser looks for the next chunk at the even address following.  That is, chunks are padded to an even size.

The RIFF chunk is at the start of the file, and for TADA.wav the size of the RIFF chunk is stored as the bytes 24 5A 04 00, which read little endian is 0x00045A24, equal to 285,220 in decimal.  The total file size equals the size of the RIFF chunk plus the 8 bytes of the chunk header itself, and for TADA.wav these add up and match.

  • Filesize = 285,228 = 8 + 285,220

Next we dig into the sub-chunks of the RIFF chunk.  The RIFF chunk contains other chunks as its data.

Right away we see a “WAVE” chunk.  This is a .WAV file!  We knew that from the file extension, but now we really know it.

“WAVE” identifies the start of wave formatted data.  This is always followed immediately by a “fmt ” chunk, and yes, that is a space at the end.  Four character codes always have 4 characters even if the name is only 3; they are plain ASCII text (not UNICODE) with no terminator.  The format chunk describes the format of the audio data.

The format chunk for this file, dissected, is…

Field Type  Description         Little endian  Big endian  Meaning
TAG         Tag Identifier      666D7420                   "fmt "
ULONG       FormatChunkSize     10000000       00000010    16
USHORT      Format              0100           0001        PCM
USHORT      Channels            0200           0002        Stereo
ULONG       SamplesPerSecond    44AC0000       0000AC44    44,100
ULONG       AvgBytesPerSecond   10B10200       0002B110    176,400
USHORT      BlockAlign          0400           0004        4
USHORT      BitsPerSample       1000           0010        16

PCM (format “1”) is the most common value for format.  There are a number of other formats defined, primarily for compressed audio, and these include Microsoft ADPCM as “2” and ITU G.711 a-law and u-law as 6 and 7.  It is entirely possible for a “valid” wav file to be handed to an audio player that will reply that it has been given an audio format it does not understand, or one that no audio device in the machine knows how to process.  PCM is the most common audio format in .wav and PCM is “Pulse Code Modulation”, which equals sound pressure levels measured by an analog to digital converter and stored into memory or file with no compression.

After the format chunk is USUALLY the “data” chunk.  For TADA.wav this is true; the data chunk starts at offset 0x24 and has the following contents.

Little endian  Big endian  Meaning
64617461                   "data"
005A0400       00045A00    Data size: 285,184
data           data        285,184 bytes of PCM data

Since the data chunk starts at offset 0x24 (36 decimal), we can add 36 decimal plus the size of the data chunk header (8) plus 285,184 (data size) to find the start of the first chunk beyond the data.  Add those up, get 36 + 8 + 285,184 = 285,228, which is the size of the file, so parsing is complete.

Everything inside the DATA chunk is PCM formatted data.  In this case, 16 bits per sample, stereo data at 44,100 samples per second.  This is CD audio format, stored in a WAV file.

Notice the sample is stereo; this means that “block alignment” is the count of bytes needed to store one full sample across all channels.  Since this is stereo data, 16 bits per sample = 2 bytes per sample, times 2 channels, equals a block alignment of 4 bytes.

The first PCM sample starts at offset 0x2C and it is 00000000. By convention, the left channel comes at the lower address which in this example is the 16 bits (2 bytes) at 0x2C equals 0x0000 and the right channel is 2 bytes later at 0x2E and it is also 0x0000. 

For PCM, 16 bit audio data is stored little endian (intel format). 

  • 16-bit PCM data is “signed” and zero represents silence (no sound pressure)
  • 8 bit PCM data is “unsigned” and the half way point at 0x80 represents silence (no sound pressure)

With a little bit of programming, you can plot this out and see sound waves – even sine waves, if you look at a file containing a pure tone.

RIFF includes additional chunk definitions and if a parser encounters a chunk type it does not understand, it should skip it and continue at the next chunk identified by the chunk length.  Some wav file editors include provisions for adding copyright text chunks for example and these would be skipped by parsers during audio playback.  Looking at the TADA.wav shipped with Windows, this is not present.
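As a quick demonstration of that chunk-walking rule, here is a minimal PowerShell sketch that lists the chunks inside the RIFF chunk of a .wav file.  It assumes a well-formed file and does no error handling:

# Walk the sub-chunks of the RIFF chunk and print each four character code and size
$bytes    = [System.IO.File]::ReadAllBytes("C:\Windows\Media\tada.wav")
$riffSize = [BitConverter]::ToUInt32($bytes, 4)              # size of the RIFF chunk
$pos = 12                                                    # first sub-chunk follows "WAVE"
while ($pos -lt ($riffSize + 8)) {
    $id   = -join [char[]]$bytes[$pos..($pos + 3)]           # chunk identifier
    $size = [BitConverter]::ToUInt32($bytes, $pos + 4)       # chunk data size
    "{0} at offset 0x{1:X2}, size {2}" -f $id, $pos, $size
    $pos += 8 + $size + ($size -band 1)                      # skip the data, pad to even
}

For TADA.wav this prints the "fmt " chunk at 0x0C and the "data" chunk at 0x24, matching the walkthrough above.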

Joe Nord

Originally posted Oct 27 2017

Comments

Comment from: Jake Visitor

Your text says 16-bit PCM data is “signed”. So, for example, looking at one channel of a 16-bit stereo PCM file, and lets say I had two bytes (little endian) that were “07 FF” hex. I would reverse these two to get “FF 07″ which would be 65527 in decimal. If it is “signed”, would not the max be +32768? I am confused as to how it is changed to signed.

11/25/17 @ 04:49 pm

Comment from: joe Member

> “FF 07″ which would be 65527
Close! FF 07 is -249. This is pretty close to zero on a 15 bit scale which means reasonably close to quiet. By 15-bit scale, I mean 15-bits of positive numbers and 15-bits of negative numbers. The top bit is “sign”.

> I am confused as to how it is changed to signed.
The sign bit (most significant bit) is a 1 (negative) so that requires a bit more work.
Answer: 2’s complement the data to find out how negative it is.  How far is the value away from 0?

1) Reverse all the bits. FF 07 becomes 00F8.
2) Add 1. 00F8 + 1 = 00F9
3) Convert to decimal. F = 15. 15*16 = 240. 240 + 9 = 249
4) Change the sign. -249

Windows instant select audio device

When your computer has more than one sound card, you may find it cumbersome to change audio devices.  The standard method requires going through the Control Panel or the Settings application, a process of multiple dialogs and multiple clicks.  There is a better way.

Whether you run the Control Panel on Windows 7 or go through the Settings application on Windows 10, there are multiple steps required to get to the audio device selection, and all of them end up at the same control panel dialog.

Two ways to get to the control panel application that sets the default audio playback and record devices

  1. Start / Control Panel / Hardware and Sound / Sound (This is the Windows 7/8 method)
  2. Start / Settings (type “sound”) / Manage audio devices (This is the Windows 10 Universal app method)

Both of the above take you to EXACTLY the same control panel application.  

Bring up Process Explorer from Sysinternals and it shows that the control panel task is really the rundll32.exe application, with parameters telling it to load and run the DLL that is the control panel sound device manager.

Command line:

“C:\WINDOWS\system32\rundll32.exe” C:\WINDOWS\system32\shell32.dll,Control_RunDLL C:\WINDOWS\System32\mmsys.cpl ,

Not sure why that last comma is there, but this is all the information we need to short circuit the long path to the sound device selection dialog.  Put that string into the clipboard.  Run a command prompt, paste, enter, voilà, sound device selection dialog.  BUT – I don’t want to use a command prompt to make this happen.

Instead, create a “shortcut” on the desktop to point to rundll32.exe as the executable program, with parameter of everything after the program name above.

Here are the steps in detail.  Point anywhere on the desktop with no icon or program.

  • Right mouse button, new, shortcut 

Click Browse: And select, this PC, C:, Windows\System32\rundll32.exe.

This will fill in the “Type the location of the item” textbox.

The name of the executable is now in place; append to the line the additional parameters that tell it which DLL to load and execute.

  • C:\WINDOWS\system32\shell32.dll,Control_RunDLL C:\WINDOWS\System32\mmsys.cpl

Notice I omitted the comma at the end.  When done, the Create Shortcut dialog looks like the below.

The last step is to give it a name.  I chose “Sound”.  That is, change rundll32.exe to “Sound”.

Save, and then double click the icon on the desktop for INSTANT control panel access to setting the default output device.

To make it prettier, set an icon.  Go back in (point at the icon, right mouse button, properties) and set the icon via “Change icon”.  Find one that looks about right and done…
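If you prefer to script the shortcut rather than click through the wizard, here is a minimal PowerShell sketch (it assumes a stock install with mmsys.cpl and shell32.dll in System32):

# Create the same "Sound" shortcut on the desktop via the WScript.Shell COM object
$shell = New-Object -ComObject WScript.Shell
$lnk   = $shell.CreateShortcut("$([Environment]::GetFolderPath('Desktop'))\Sound.lnk")
$lnk.TargetPath   = "$env:WINDIR\System32\rundll32.exe"
$lnk.Arguments    = "$env:WINDIR\System32\shell32.dll,Control_RunDLL $env:WINDIR\System32\mmsys.cpl"
$lnk.IconLocation = "$env:WINDIR\System32\mmsys.cpl,0"
$lnk.Save()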

Joe Nord

Originally posted Oct 27 2017

Comment from: Martin Visitor

Hi, thank you! Exactly what I was looking for! It’s perfect. 03/27/19 @ 07:08 pm

Skype for business audio with more than 1 audio device

My primary desktop computer has an integrated audio device on the system board and a USB attached Blue Yeti microphone.  Great mic, it makes me sound good on online meetings and that’s a win.  The Yeti, in addition to being a high quality microphone, also has a headphone jack underneath, which has a very high quality DAC and permits great music playback as well as the ability to hear yourself when you talk in online meetings.  In my view, that last part is kind of not needed, but it is there, and if you mute the microphone, you can listen to music without hearing yourself type.  With two audio devices in the machine, Windows allows easy selection of the default audio device for playback and the default audio device for recording, and as you may guess, my configuration is to use the Yeti microphone for recording and the system speakers for playback.  Now, add Skype for Business audio conferencing and you’ll find that when using the Yeti as microphone, Skype absolutely INSISTS on using the headphone connection on the Yeti as the audio playback device – a device which normally has nothing plugged in.  The result is that when you join meetings, the audience can hear you, but you cannot hear them.

I struggled with this for a bit, using phones to dial into meetings.  I have since found the configuration screens to tell Skype to use the system speakers for conferencing.  

  • You will sound better when using headphones!

Yes, I probably would – if I had headphones.  I don’t!  I have a very high end microphone connected via USB and that isn’t headphones.  I do not want to use the audio output of the Yeti for speaker/headphone connection, I want to use the system speakers.

When on a call, these two configurations will change the audio output device.

In Skype options, you can set the default audio device for Skype using this screen.

Joe Nord

Originally posted Oct 27 2017

Comment from: Neil McDonnell Visitor

Thank you! I do a lot of recording and podcasts, and recently this issue impacted me deeply. Hours of search found you and two seconds later the problem was resolved. 🙂 Thanks! Neil McDonnell

01/21/18 @ 08:39 pm