When Windows 10 starts and a user logs on, the operating system will by default automatically restart all the applications that were running when the system was rebooted, including relaunching them to reload documents and revisit web browser URLs. Handy in concept, but this is not how I want the machine to behave. I would like a clean desktop that looks the same every time the machine boots. Thankfully, there’s a setting for this. It’s not well advertised, but it does exist.
Search the internet as I might, I couldn’t find the answer, but after posting the question on Twitter, it was quickly solved by @JenMsft on the Windows shell team.
Answer: Settings, Accounts, Sign-in options, scroll down, “Use my sign-in info to automatically finish setting up my device and reopen my apps after an update or restart”. By default, this is enabled; click the button to turn it off and everything now works … “as it should”.
Once I knew what to search for, I was able to find the online documentation. The option exists only on machines that are not part of a domain. It appears I’m more of an enterprise user even on my home computer.
Warren Simondson @Caditc noted that changing this setting enables or disables the following item in the registry.
It is a DWORD setting (boolean); 1 means “opt out”, i.e. do not restart applications at machine start.
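The registry screenshot didn’t survive into this copy of the post. Based on Microsoft’s documentation of Winlogon ARSO (Automatic Restart Sign-On), the per-user opt-out lives under a key like the one sketched below – treat the exact path as an assumption to verify on your own machine, and note that the SID is a placeholder for the actual user SID:

```reg
Windows Registry Editor Version 5.00

; Hypothetical example - substitute the real user SID for the placeholder
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\UserARSO\S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-1001]
"OptOut"=dword:00000001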
As a once-upon-a-time programmer of user profile code, this just became neat! Why in the holy h@ll is WinLogon using an HKLM-based setting to control a per-user behavior, when the user registry must already be loaded before the applications are launched? I note that this makes it really difficult to place this opt-out into a default user profile to make it standard policy for the machine. If anyone has good ideas as to why the machine registry was used for this per-user setting, do please add them in the comments below.
Final thought: the above prevents a machine-wide default of opting out. There “could be” a separate setting to make that global and, if I had to guess, it would be in the same WinLogon space with no SIDs. I have not had a chance to check, but it’s a possibility.
Among the jobs of an operating system is maintaining the disk file system, including the directory structure, file names and, for each file, the file size and the pertinent date information for when the file was created, modified and potentially accessed. I have been editing some videos lately and, after producing an MP4 using ffmpeg, I used PowerShell to set the file created and modified times to the date the video was recorded. This makes sorting easier and allows viewing software to display timelines; good things. MP4 files also have an internal date that is set by cameras to note the datetime when the video was recorded. I was surprised to learn that Windows Explorer displays the MP4 date rather than the file date, and this can be problematic if the MP4 file does not contain valid date data. This post describes the issue in detail and provides steps for adjusting all the dates to the same datetime.
Notice the use of the term “datetime” rather than “date” and “time”. Windows stores file dates and times as one field, which on Windows NTFS is a 64-bit signed count of 100ns intervals since January 1, 1601. The point is that it can split a second into ten million intervals, and 64 bits is a big enough space that it can accurately store both date and time for a very long time. As users, we never see this and as programmers, we usually don’t need to worry about the details of the field; it’s a 64-bit number that represents the file’s date and time in one go, a “datetime”. Since it’s one field, comparisons of before, after and same become very easy and can be done in one operation, and since it’s signed, you can “subtract” datetimes to figure out how long it was between two times.
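To make the representation concrete, here is a small Python sketch (not from the original post) that converts an NTFS-style tick count into a calendar datetime and “subtracts” two of them:

```python
from datetime import datetime, timedelta, timezone

# NTFS datetimes count 100 ns ticks from this epoch
NTFS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ticks):
    # 10 ticks make one microsecond; ten million make one second
    return NTFS_EPOCH + timedelta(microseconds=ticks // 10)

def elapsed(ticks_a, ticks_b):
    # because a datetime is one signed number, elapsed time is simple arithmetic
    return timedelta(microseconds=abs(ticks_a - ticks_b) // 10)
```

For example, tick count 0 is January 1, 1601 itself, and the well-known constant 116444736000000000 lands exactly on the Unix epoch, January 1, 1970.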
Windows stores 3 datetime fields for every file on disk
MP4 stores its own datetime of when the file was created – normally stored by a video camera assuming you remembered to set the clock!
On Windows NT systems, the Create date is not the date the content was first created; it is the date that this specific file was created. If you copy the file to a new location, the new file gets a fresh Create date equal to “now”, and if you then edit the image/video with a program, the “modified” date is updated and all record of when the file was originally created is lost. Cameras work around this by storing the date the picture was taken or the video recorded directly inside the image/video file.
As a side note, the technique in PowerShell to adjust the file create date back to what it is supposed to be is:
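The snippet itself didn’t survive into this copy of the post; a typical PowerShell approach (the file name and date here are placeholders, not the originals) looks like this:

```powershell
# Set both the created and modified datetimes back to when the video was recorded
$when = Get-Date '2004-07-04 10:30:00'
$file = Get-Item '.\video.mp4'
$file.CreationTime  = $when
$file.LastWriteTime = $when
```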
In my recent case, the files were MP4, videos of the kids in their younger days. I had imported these from an analog camera to the computer and, after processing through a few different tools, the last step was using ffmpeg to convert to MP4. This worked great! Then I set the CreationTime to the historic datetime of the video being recorded and … thought I was done.
View with Windows Explorer and … the DATE DID NOT TAKE!
Explorer is showing that the video’s datetime is 12/31/1969 7:00pm. That isn’t right!
Compare the GUI view to command prompt view. Never trust a GUI!
Command prompt has the “correct” datetime.
Ask the Explorer GUI to show file details, and we have a match! It is listed as the “Media Created” date.
And using EXIFTOOL to show the innards of the data, we have a winner!
The dates inside the MP4 file are all zeros! Actually, this makes sense. I captured the video content from one tool, plumbed it through a few others, and out the other side, ffmpeg produced the MP4 file. By the time it got there, the only record of the datetime that the video was recorded was in the name of the file. The MP4 file has zeros as its internal record of when the file was created.
Sometimes things try to do too much
The job of an operating system is to maintain the file structure on disk. Esoteric concepts like what is inside the files, for video editing applications, are IMHO an application responsibility. BUT – Windows Explorer is trying to help out and is displaying the MP4 internal date rather than the Create, Modified, or Accessed time from the file on disk. Explorer is trying a bit to be a video display application, something that really isn’t its job. Then again, it can also show thumbnails for images and … people like that. Is it something that SHOULD be in an operating system? The lines blur, and we can all have a nice debate about whose job it is to display image/video content and metadata.
Back to reality – this is not showing the date that I want it to show, and the solution is either to configure Explorer to show a different date field or to modify the MP4 files to have valid datetime information embedded. “B” is the better solution. I went looking on the internets and found this fine post on StackOverflow, link. Bingo! I am not the first person to have this problem. In the post, Edward Brey even provides Visual Basic source code to modify the MP4 datetimes to any date you want. Awesome. It took a bit to compile and run it, and everything looked good, but I wasn’t done.
In running the VB code to adjust the MP4 datetime, the file modified date was also modified, so I ran my PowerShell script one more time to fix the file datetimes and … as if by magic, the MP4 datetime was now incorrect. For some files off by 5 hours, for others off by 4. A bit of study concluded that the 4 vs. 5 was whether daylight saving time was in effect on the date of each file. When PowerShell (.NET) adjusts the CreationTime, it also adjusts the MP4 internal create date! What? Why?
I did not ask PowerShell to adjust the datetimes that are in the file contents, only the file “date”. But, the embedded date in the MP4 file did change!
And we’re back to the job of operating systems vs. the job of applications, but this could be a very long diversion
And it’s worse. The MP4 datetime was “correct”, per me, before setting the file create date; after setting it, the MP4 datetime was incorrect by a period of time equal to the number of time zones away from GMT on the date the file was recorded.
I studied this for about a day and concluded that MP4 internal datetimes are the datetime that the file was created/recorded/modified, and that time shall always be ZULU! The VB code didn’t know whether the time I gave it was local time or GMT/UTC/Zulu, so it went with what it had. When the file CreationTime was set from PowerShell, the .NET runtime decided to help out and adjusted the MP4 datetime to match the create date on the file itself. Okay, problem understood and dates all adjusted. What is “odd” is that if PowerShell and .NET adjust the MP4 internal date when it is deemed “wrong”, why don’t they also do that when the internal datetime is zeros? Consistency here would be a win.
Please add your comments below. If you want the compiled version of the program to adjust the MP4 internal dates, drop a line and I’ll send your way. email to joe at this domain.
The DOS world’s need for memory grew, and the 64KB available to .COM executables was no longer adequate. The NE “new executable” file format was invented; it uses the .EXE file extension rather than .COM. The first 2 bytes of these files include a tag to identify the format, and if you guessed that this was “NE” to denote “new executable”, you’d be incorrect. The first 2 bytes of every .EXE file are “MZ”, famously the initials of the Microsoft programmer who wrote the code, Mark Zbikowski.
The EXE file format allowed multiple segments to be defined and included the ability for separate compilation of portions of the program, and SDKs. That is, different parts of the program could be compiled into .OBJ files, and then a LINK step is performed to assemble the resulting EXE file. This enabled many good things, like separate compilation of the varied portions of a program and the ability to purchase libraries of code from other developers without them having to provide source code. The NE format also permitted programs to be “large”, occupying up to all the memory available on DOS computers.
I intended to write a detailed description of this evolution of file formats here, but there’s no need; it’s been well done in detail by others and I provide links here
Cutting to the meat of it, the NE (MZ) format executable has these portions
Which is really
The header includes information for allocating a heap and a stack. One grows up, one grows down; when they collide, the application is out of memory. Notice that this is still DOS, so it isn’t as if the operating system is going to do anything when the application exhausts its memory. Still, the executable format is starting to grow into a real concept of an operating system, with a loader.
The code and relocation list deserve a bit more description, as there can be multiple code regions, each limited to 64KB (the size of a SEGMENT).
The executable is defined in segments, each of which is loaded into memory at a paragraph boundary (a 16-byte boundary). The SEGMENT of that paragraph of memory can be addressed using the segment registers; 16:16 segment:offset addressing converts to a physical address by shifting the segment left 4 bits and adding the offset. At this time in the life of Intel processors, there was no such thing as a virtual address. The 8086 CPU is a pretty straightforward machine: segment:offset converts straight to physical, and when the CPU addressed it, the access actually went all the way to the ISA bus where memory would respond.
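The address arithmetic is easy to sketch; in Python:

```python
def physical_address(segment, offset):
    # 8086 real mode: physical = (segment << 4) + offset, a 20-bit result
    return ((segment << 4) + offset) & 0xFFFFF

# The BIOS data area at 0040:0072 is physical address 0x00472,
# and the BIOS reset entry F000:E05B is physical 0xFE05B.
```

Note that many segment:offset pairs alias the same physical address, which is why the loader can place segments anywhere it likes on a paragraph boundary.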
After loading each code segment into RAM, the DOS loader applies the fixup records so that code calling between segments can call the 16:16 addresses where the program segments are actually loaded at runtime. There is NO provision for DLLs or dynamic linking.
This file format was the primary format for DOS computers through the long life of the DOS operating system and it is still with us today. The modern PE file format includes a NE/MZ format executable as a “DOS Stub” at the front. This is primarily so that programs intended for Windows 3.11 or OS/2 could display a message along the lines of “this program is intended to execute under Windows” and then the stub terminates. Creative programmers can use the DOS stub to run a DOS version of a program when on DOS and a Windows or OS/2 version of program when on those operating systems. We’re on a journey here; PC operating systems are starting to look like “real computers”. The next post will take us into modern times of about 1990.
In the beginning, the DOS .COM file format was the format for executables of size less than 64KB and, let’s face it, who would really need more? I’m headed down a path to discuss the PE executable file format, and no good discussion is complete without a foundation. In the beginning, there was DOS; life was simple and there was nothing between you and the computer. This post describes the early .COM executable file format, showing code, data, everything you need for small executables!
The .COM file format was simple: the whole program had to be less than 64KB in size. Technically less than 64KB minus 0x100 bytes, but stick with me. To run a program, DOS allocated memory for the size of the file, loaded the entire file contents into it and … branched to it via a “call” instruction. Simple stuff: absolutely no runtime fix-ups, no DLL records, no DLLs!, heck, no linked operating system APIs that you could call! Need something in your program? You made it part of your 64KB, because that’s all there is!
Going from memory today, because a DOS computer is not readily available, the offset (IP) at the start of execution was hex 100. Notice I didn’t say into the starting segment; there was only ONE segment. Code, data, everything in one place with absolutely no distinction between anything. The verbatim bytes from the .COM file are placed into memory at CS:100, and DOS calls it. Did the program work or not? DOS didn’t care; it loaded it into memory and … “called” it. Everything else was bonus time. If the machine managed to not become a colored checkerboard mess during the life of the program, the program could eventually “return” and execution would go back to DOS, which would put you back at the COMMAND.COM prompt.
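As a toy model of the load step (ignoring the PSP contents and real segment arithmetic; the function name is mine, not DOS’s), the loader is little more than a copy and a call:

```python
def load_com(image: bytes) -> bytearray:
    # DOS builds a 256-byte PSP at offset 0 of the single 64KB segment,
    # copies the raw .COM file bytes to offset 0x100, then calls CS:0100.
    if len(image) > 0x10000 - 0x100:
        raise ValueError(".COM image must fit in 64KB minus the 0x100 PSP")
    segment = bytearray(0x10000)              # the one and only segment
    segment[0x100:0x100 + len(image)] = image # verbatim file bytes at CS:0100
    return segment
```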
Though I have no DOS computer readily available, I do have old programs. The smallest and easiest for show and tell is one I wrote with some friends in 1988 to efficiently reboot a computer, skipping the memory test during boot. The program is … 16 bytes and performs the same activity as Ctrl-Alt-Delete, including setting a flag to tell the BIOS to skip the memory test on boot. Interestingly, we wrote this using debug.com and the “a” command, so there is no source and … there never was any source. To show what it does, we need a disassembler that understands 16-bit Intel little-endian machine code; DEBUG.COM from DOS would do it, but again, no DOS handy. I found a suitable disassembler online at shell-storm, here, but first we need hex. Using my own HexDump utility, I get these hex bytes
Feed that into the disassembler above and get the below. I added the comments.
0x0000: mov ax, 0x40 ; Establish access to the BIOS data area
0x0003: mov ds, ax ; Data segment is 40 equals physical address 400
0x0005: mov word ptr [0x72], 0x1234 ; set 40:72 to 0x1234 (BIOS Flag for fast reboot)
0x000b: ljmp 0xf000:0xe05b ; Branch to BIOS "reboot" code
When run, these 4 instructions are “it”. DOS loads the code into memory at offset 100h and branches (calls) to the first byte. That’s it.
These were simple times and it was downright impressive what could be accomplished in 64KB, or in this case, in 16 bytes. This program reboots the machine, so there is no need for the code above to include return logic, but it could be done with a “ret” instruction or you could issue the DOS system call to terminate with a return code.
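The hex dump itself isn’t reproduced above, but the 16 bytes can be reconstructed from the disassembly. This is my hand-assembly from the standard 8086 encodings, so double-check it against the original binary before trusting it:

```python
# boot.com, reconstructed from the disassembly above
BOOT_COM = bytes([
    0xB8, 0x40, 0x00,                    # mov ax, 0x40
    0x8E, 0xD8,                          # mov ds, ax
    0xC7, 0x06, 0x72, 0x00, 0x34, 0x12,  # mov word ptr [0x72], 0x1234
    0xEA, 0x5B, 0xE0, 0x00, 0xF0,        # ljmp 0xf000:0xe05b
])
assert len(BOOT_COM) == 16               # matches the 16-byte file size
```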
Calling runtime services
These early systems didn’t have library linkage or DLL fixups – there were no .LIBs or .DLLs and “that’s the way we liked it”. If you wanted to call the operating system to do something useful like display a string or read a file, you loaded up some registers and kicked off an “int 21h” to call DOS. YES – software issuing the equivalent of hardware interrupts to call the operating system! It’s DOS, do what you want! The DOS programming references told you what to put in the registers before the call; issue the DOS interrupt and things happen. Get the parameters wrong and the machine could be hosed – but hey! It was your machine, and there was very little that couldn’t be cured with the Big Red Switch or a Ctrl-Alt-Delete or, as this program demonstrates, having a batch file call a program named “boot.com” to accomplish the same end.
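A classic illustration of the convention (my example, not from the original program) is the DOS “print string” service, with register values per the standard DOS programming references:

```asm
; Print a '$'-terminated string via DOS, then exit cleanly
    mov ah, 09h          ; DOS function 09h: write string to stdout
    mov dx, offset msg   ; DS:DX points at the string
    int 21h              ; software interrupt = the "system call"
    mov ax, 4C00h        ; DOS function 4Ch: terminate, return code 0
    int 21h
msg db 'Hello from DOS$'
```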
Compatibility with CP/M
With a joke at the start of this post that nobody would ever need more than 64KB, the real story of .COM in DOS is that it provided application backward compatibility with the 64KB-address-space CP/M operating system. DOS was written to the abilities of the 8088/8086 Intel CPUs, and these had a segment:offset based addressing limit of 20 bits, which equals 1MB. The segment was shifted left 4 bits and the offset added to get the physical address. The CP/M executable format was 64KB of flat memory with no segments. If you were designing DOS for an 8086/8088 and wanted backward compatibility with the existing apps of the time, this made it possible: load an application into memory, pre-set the CS, DS, ES and SS segment registers to point to the loaded SEGMENT (paragraph-boundary memory) and “call” the loaded program; it would run, the whole time blissfully unaware that the machine supported segment:offset addressing. Next, we move forward to the New Executables!
Had an interesting one today when Outlook 2016 prompted me to permit a website to configure email settings for a user that isn’t me. In this case, it was for a user that doesn’t work at my company anymore, which led me to ponder whether someone in IT was trying to connect me to the departed user’s email inbox. Answer: no.
There are many descriptions of this dialog on the web, but they all seem to end with telling Outlook to never do autodiscover. I didn’t want that; I wanted to solve the issue and leave the configuration untouched. After trying all the easy things to make the dialog go away, it continued to be shown on each fresh launch of Microsoft Outlook, but the mystery is now solved.
The firstname.lastname@example.org email address presented by the dialog is 1) valid, 2) not me, and 3) for a user that no longer works at my company. The email inbox link-up didn’t make sense, so I did some more digging and concluded that it was calendars. Ctrl-2 to see calendars, view “shared calendars” and, sure enough, there is an unchecked item for this user. I am not viewing his calendar, but I did once upon a time and Outlook seems to remember that.
Clear that user from the list of shared calendars, close Outlook, reopen and … presto, problem solved.
Hopefully this helps someone else. Enjoy.
Originally posted Feb 07, 2018
Comment from: IanVisitor
Thanks, it seems to have done the trick. I had a work’s calendar in shared. I was reluctant to press okay as I’ve made the mistake of accepting similar before. I use my own laptop to read work’s emails and when pressing something like this in the past, meant the work’s IT took over the settings for my laptop. So of course I was reluctant to do the same.
07/23/18 @ 02:24 pm
Comment from: IanVisitor
spoke too soon…back again…sorry about that. 07/24/18 @ 10:19 am
Net Neutrality is an emotional subject. It sounds like it is about equality and liberty for all, but that’s a simplification. As a foundation thought, this is not about “equality”; it is about which federal agency regulates the Internet, the FCC or the FTC. Read this post and I hope it will bring some clarity to the discussion, and help the folks on both sides of this polarizing topic realize that they have more in common than they thought – keep the internet free!
The view of Net Neutrality proponents is that internet service providers (e.g. Comcast, AT&T) are in the business of delivering internet bits to / from users, and any filtering of these bits, or preference of some bits to other bits goes against the god given foundation of the US of A. This is ‘Merica!. Don’t mess with my internet! Why then would anyone be interested in repealing Net Neutrality?
Some background on pricing
As home users, we pay a monthly fee and stream all we want. The ISPs average usage out over time and make sure they have the ability to deliver the bandwidth that their customers demand, and you get charged to make all this happen. Upstream of your immediate ISP, it’s more complicated.
Upstream of Comcast/AT&T, the Internet is a world of interconnected networks, big networks, with peering agreements. The Internet is not a network; it is a network of networks. We take it for granted that a user on AT&T U-verse can communicate with a user on Comcast, but this would not happen if those companies – and the other companies between them – did not have peering agreements to move bits to/from their major networks. (I just checked: there are 13 networks between my house and www comcast net.) Technically that is a count of routers, and there could be more than one inside each ISP along the way, but the point is that there are multiple layers of networking providers between any of us as home users and the big content providers. They work to keep this path short, but the layers must exist for the internet to scale to the number of users that it supports.
All of this network of networks traffic ultimately funnels to a set of backbone networks which tie all the networks together and this backbone is funded by the US National Science Foundation and the data throughput is staggering.
In these big networks, network inter-connection is “free” so long as the number of bits in is roughly equal to the number of bits out. This was one of the foundation principles defining the network of inter-connected networks that we call the internet. Everyone wants to connect to it because there is mutual benefit for all parties. This works so long as producers and consumers of bits are roughly random – it balances out. By contrast, consider what happens when the numbers are not 50/50. Here, if you SEND more than you receive, you pay the downstream party for the bits you send beyond the bits you receive.
Enter the modern world of video delivery. There are multiple networks downstream of Netflix, yet the cost of carrying Netflix’s bits gets REPEATED at every downstream network; Netflix only pays the first. The issue is that some companies have very successfully figured out that they can get into the video delivery business over the internet and OTHER companies, downstream of them, will be forced to PAY to deliver their content. This is nothing short of a genius move! Netflix gets other companies to pay to deliver the content and the consumers pay Netflix. SMART! The ISPs in the middle, though, are not happy with this arrangement, and so we get calls to repeal net neutrality! Bottom line: they want to be paid. They want to be paid by Netflix, but if they can’t get that, they’ll eat it out of the consumers with nickel, dime and dollar additions to bills.
Caching comes into play. The Netflix video library seems infinite, but it’s finite. ISPs CAN cache the whole thing, and if they do, the ISP no longer has to pay the upstream internet providers for carrying the hugely redundant bits. Netflix has caching servers it can install at ISPs to make this easier. ISPs almost surely also expect to be paid for hosting the caching servers in their data centers, while Netflix likely expects them to do it nearly for free, as a caching service that saves the ISP the fees it would otherwise pay the upstream network providers.
Then this multiplies by the number of companies in the video delivery business. ISPs end up with 100s of caching servers scattered around the country for EVERY video delivery company; multiply by 100s of video companies and that equals real costs. The video delivery companies that are not cached are now pissed because they are being discriminated against. You cache Netflix, but not me! Netflix comes in high fidelity and my service is throttled! We want equality! It’s illegal! It’s anti-competitive! Notice I’m not talking about outright discrimination here; this is just the way it plays out when attempts are made to make things efficient for the big players.
FCC or FTC
For the folks wanting to repeal Net Neutrality, the grand question of the present debate is not one of equality for everyone; the question is one of fair business practices and anti-trust. Are the actions of the ISPs monopolistic? Are contracts with upstream providers “fair”? The Net Neutrality proponents want a non-filtered internet. Notice that the two groups do not really want different things; though everyone appears to not get along, many are motivated by the same goal: keep the internet free!
With Net Neutrality now repealed, the internet does not revert to a place for ISPs to perform evil. It goes back to a world of 2 years ago, where the FTC was in charge instead of the FCC. Is the FTC more hands off? Possibly – but anti-competitive rules apply even more strongly at FTC so the fears here of doom with the repeal do not seem well founded.
Keeping the internet free
For me, the present course is good – but not for any of the above reasons. I am more interested in the 1994 CALEA law and whether it continues to exempt computers and networks from the CALEA requirements for real-time, remote wiretap. Putting the internet in the dominion of the FCC makes it seem more like a “phone”. If the internet is regulated by the FTC rather than the FCC, this puts the internet further away from common carrier status and, IMO, is GREAT for civil liberties. We’ll have to wait and see if the courts and laws agree.
I wrote the foundation of this on Facebook a few days ago and it received a good deal of interest. I moved it here to make it a bit easier to read. Let me know your thoughts.
The RIFF (.wav) file format has been around unchanged since the early 1990s and is still in common use today. It goes back to a time when the CD audio format was king, and RIFF .wav follows the Red Book convention pretty closely, with the change that the number of channels, bits per sample and samples per second can vary as described in the file header.
As an example file, I have selected TADA.WAV from Windows 10, \windows\media\tada.wav. This file is 285,228 bytes, the important part is in the first 44 bytes shown here.
The first thing to notice is that .wav files, and indeed all RIFF files, always have “RIFF” as the first 4 characters. The RIFF header is immediately followed by a WAVE header. RIFF is “Resource Interchange File Format” and, in the formatting of file data, RIFF refers to everything as “chunks”. A chunk is a collection of data that starts with a 4-character code identifier, followed by a 32-bit length, which is the amount of data in the chunk not including the chunk header. These and all numbers in RIFF are stored in little-endian format. Chunk sizes are usually even; where a size is odd, the parser looks for the next chunk at the even address following. That is, chunks are padded to an even size.
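A minimal chunk walker in Python shows how little there is to the container format (a sketch of the rules just described; real files also deserve sturdier bounds checking):

```python
import struct

def walk_chunks(buf, pos, end):
    """Yield (fourcc, data_offset, size) for each chunk in buf[pos:end]."""
    while pos + 8 <= end:
        fourcc = buf[pos:pos + 4].decode('ascii')         # 4-character code
        size, = struct.unpack_from('<I', buf, pos + 4)    # little-endian 32-bit length
        yield fourcc, pos + 8, size
        pos += 8 + size + (size & 1)                      # odd sizes are padded to even
```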
The RIFF chunk is at the start of the file. For TADA.wav, the stored size of the RIFF chunk is the byte sequence 24 5A 04 00 which, read as a little-endian number, is 0x00045A24, equal to 285,220 in decimal. The total file size equals the size of the RIFF chunk plus the 8 bytes of the RIFF chunk header, and for TADA.wav these add up and match.
Filesize = 285,228 = 8 + 285,220
Next we dig into the sub-chunks of the RIFF chunk. The RIFF chunk contains other chunks as its data.
Right away we see a “WAVE” chunk. This is a .WAV file! We knew that from the file extension, but now we really know it.
“WAVE” identifies the start of wave formatted data. This always starts with a “fmt ” chunk immediately following and, yes, that is a space at the end. Four-character codes always have 4 characters, even if the code itself is only 3 characters long. Four-character codes are just the ASCII text (not UNICODE), with no terminator. The format chunk describes the format of the data.
The format chunk for this file, dissected, is…
PCM (format “1”) is the most common value for format. There are a number of other formats defined, primarily for compressed audio; these include Microsoft ADPCM as “2” and ITU G.711 a-law and u-law as 6 and 7. It is entirely possible for a “valid” wav file to be handed to an audio player that replies that it does not understand the audio format, or that no audio device in the machine knows how to process it. PCM is the most common audio format in .wav; PCM is “Pulse Code Modulation”, i.e. sound pressure levels measured by an analog-to-digital converter and stored into memory or a file with no compression.
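The fixed fields of a PCM “fmt ” chunk unpack directly; a sketch with field names per the canonical WAVEFORMAT layout:

```python
import struct

def parse_fmt(data):
    # 16-byte PCM "fmt " payload, all little-endian
    (audio_format, channels, samples_per_sec,
     avg_bytes_per_sec, block_align, bits_per_sample) = struct.unpack_from('<HHIIHH', data)
    return {'format': audio_format, 'channels': channels,
            'samples_per_sec': samples_per_sec,
            'avg_bytes_per_sec': avg_bytes_per_sec,
            'block_align': block_align, 'bits_per_sample': bits_per_sample}

# For TADA.wav this should report format 1 (PCM), 2 channels,
# 44,100 samples per second, 16 bits per sample, block align 4.
```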
After the format chunk is USUALLY the “data” chunk. For TADA.wav this is true; the data chunk header starts at offset 0x24 and has the following contents.
Data size: 285,184
285,184 bytes of PCM data
Since the data chunk header started at offset 0x24 (36 decimal), we can add 36 plus the size of the data chunk header (8) plus 285,184 (data size) to find the start of the first chunk beyond the data. Add those up, get 36 + 8 + 285,184 = 285,228, which is the size of the file, so parsing is complete.
Everything inside the DATA chunk is PCM formatted data. In this case, 16 bits per sample, stereo data at 44,100 samples per second. This is CD audio format, stored in a WAV file.
Notice the sample is stereo; “block alignment” is the count of bytes needed to store one full sample frame. Since this is stereo data, 16 bits per sample = 2 bytes per channel, times 2 channels, equals a block alignment of 4 bytes.
The first PCM sample starts at offset 0x2C and it is 00000000. By convention, the left channel comes at the lower address, which in this example is the 16 bits (2 bytes) at 0x2C, equal to 0x0000; the right channel is 2 bytes later at 0x2E and is also 0x0000.
For PCM, 16 bit audio data is stored little endian (intel format).
16-bit PCM data is “signed” and zero represents silence (no sound pressure)
8 bit PCM data is “unsigned” and the half way point at 0x80 represents silence (no sound pressure)
With a little bit of programming, you can plot this out and see the sound waves – even perfect sine waves if you look at a file containing a pure tone.
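For example, here is a Python sketch that generates 10 ms of a pure 440 Hz tone as 16-bit PCM; plot the sample values and you will see the sine wave:

```python
import math
import struct

RATE = 44100          # samples per second
FREQ = 440.0          # tone frequency in Hz
COUNT = 441           # 10 ms worth of samples

samples = [int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
           for n in range(COUNT)]
pcm = struct.pack('<%dh' % COUNT, *samples)   # signed 16-bit little endian
```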
RIFF includes additional chunk definitions and if a parser encounters a chunk type it does not understand, it should skip it and continue at the next chunk identified by the chunk length. Some wav file editors include provisions for adding copyright text chunks for example and these would be skipped by parsers during audio playback. Looking at the TADA.wav shipped with Windows, this is not present.
Originally posted Oct 27 2017
Comment from: JakeVisitor
Your text says 16-bit PCM data is “signed”. So, for example, looking at one channel of a 16-bit stereo PCM file, and lets say I had two bytes (little endian) that were “07 FF” hex. I would reverse these two to get “FF 07″ which would be 65527 in decimal. If it is “signed”, would not the max be +32768? I am confused as to how it is changed to signed.
11/25/17 @ 04:49 pm
Comment from: joeMember
> “FF 07″ which would be 65527 Close! FF 07 is -249. This is pretty close to zero on a 15 bit scale which means reasonably close to quiet. By 15-bit scale, I mean 15-bits of positive numbers and 15-bits of negative numbers. The top bit is “sign”.
> I am confused as to how it is changed to signed. The sign bit (most significant bit) is a 1 (negative), so that requires a bit more work. Answer: take the 2’s complement of the data to find out how negative it is – how far the value is from 0.
1) Invert all the bits: FF07 becomes 00F8.
2) Add 1: 00F8 + 1 = 00F9.
3) Convert to decimal: F = 15, 15 × 16 = 240, 240 + 9 = 249.
4) Apply the sign: -249.
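Python’s struct module does the same conversion in one call; a small sketch (my addition, assuming the bytes appear in the file in the order 07 FF):

```python
import struct

raw = b"\x07\xff"                    # file order: low byte first (little-endian)
value = struct.unpack("<h", raw)[0]  # "<h" = little-endian signed 16-bit
print(value)  # -249
```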
When your computer has more than one sound card, you may find it cumbersome to change audio devices. The standard method requires going through the Control Panel or the Settings application, a process of multiple dialogs and multiple clicks. There is a better way.
Whether you run the Control Panel on Windows 7 or go through the Settings application on Windows 10, there are multiple steps required to get to the audio device selection, and all of them end up at the same control panel dialog.
Two ways to get to the control panel application that sets the default audio playback and record devices
Start / Control Panel / Hardware and Sound / Sound (This is the Windows 7/8 method)
Start / Settings (type “sound”) / Manage audio devices (This is the Windows 10 Universal app method)
Both of the above take you to EXACTLY the same control panel application.
Bring up Process Explorer from Sysinternals and it shows that the control panel task is really the rundll32.exe application, with a parameter telling it to load and run the DLL that implements the control panel sound device manager.
Not sure why that last comma is there, but this is all the information we need to short-circuit the long path to the sound device selection dialog. Put that string into the clipboard, run a command prompt, paste, press Enter, and voilà: the sound device selection dialog. BUT I don’t want to use a command prompt to make this happen.
Instead, create a “shortcut” on the desktop pointing to rundll32.exe as the executable program, with a parameter of everything after the program name above.
Here are the steps in detail. Point anywhere on the desktop with no icon or program.
Right mouse button, new, shortcut
Click Browse and select This PC, C:, Windows\System32\rundll32.exe.
This will fill in the “Type the location of the item” textbox.
With the name of the executable in place, append the additional parameters on the same line to tell it which DLL to load and execute.
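For reference, the shortcut target commonly used to open this dialog looks like the line below. Treat it as an assumption on my part rather than a quote from the Process Explorer output above; the double comma passes an empty applet argument, and the final number reportedly selects the tab (0 for Playback, 1 for Recording):

```text
C:\Windows\System32\rundll32.exe shell32.dll,Control_RunDLL mmsys.cpl,,0
```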
My primary desktop computer has an integrated audio device on the system board and a USB-attached Blue Yeti microphone. Great mic; it makes me sound good in online meetings, and that’s a win. The Yeti, in addition to being a high-quality microphone, also has a headphone jack underneath with a very good DAC, which permits great music playback as well as the ability to hear yourself when you talk in online meetings. In my view, that last part is kind of unnecessary, but it is there, and if you mute the microphone, you can listen to music without hearing yourself type.

With two audio devices in the machine, Windows allows easy selection of the default audio device for playback and the default audio device for recording, and as you may guess, my configuration is to use the Yeti microphone for recording and the system speakers for playback. Now add Skype for Business audio conferencing, and you’ll find that when using the Yeti as the microphone, Skype absolutely INSISTS on using the headphone connection on the Yeti as the audio playback device – a device which normally has nothing plugged in. The result is that when you join meetings, the audience can hear you, but you cannot hear them.
I struggled with this for a bit, using phones to dial into meetings. I have since found the configuration screens that tell Skype to use the system speakers for conferencing.
“You will sound better when using headphones!”
Yes, I probably would – if I had headphones. I don’t! I have a very high-end microphone connected via USB, and that isn’t headphones. I do not want to use the audio output of the Yeti for a speaker/headphone connection; I want to use the system speakers.
When on a call, these two configurations will change the audio output device.
In Skype options, you can set the default audio device for Skype using this screen.
Originally posted Oct 27 2017
Comment from: Neil McDonnell (Visitor)
Thank you! I do a lot of recording and podcasts, and recently this issue impacted me deeply. Hours of search found you and two seconds later the problem was resolved. 🙂 Thanks! Neil McDonnell
Hurricane approaching, you live on the water with a canal behind the house: does the boat go in the water or stay on the lift? With the experience of Hurricane Irma just completed, I can answer this question: put the boat in the water.
A better answer is “get the boat out of the water, onto a trailer, and drive away”. That isn’t possible in all cases, especially for larger boats, and I will add that even if you think you did well and found a trailer before the storm, you may come home to find the canal already closed off, with neighbors’ lines strung across it a couple of days before storm arrival.
Here in Lighthouse Point (Broward County, Fort Lauderdale, FL), we just experienced Hurricane Irma. A pretty good wind here, nothing like the Keys, but winds at hurricane strength for 6 or more hours. We are a couple thousand feet from the ocean, though the barrier island of Hillsboro Mile protects us from it; the Hillsboro Inlet is less than 1 mile away. I have two boats, and both made it through the storm with no damage: one in the canal and a smaller boat on the side of the house on a trailer.
First boat: a 1995 Mako 22.1-B center console with T-top, which spends most of its time on an “L” lift rated for more than twice its weight.
For smaller hurricanes, I have left the boat on the lift successfully: tied the boat to the lift and tied lines fore and aft to pilings far away, to keep the boat from swaying and potentially twisting the lift in directions where it is not designed to take high stress. This worked, but Irma looked more like a “3” than a “1”, so this time I put the boat in the water, and it was a good call.
Side note: oh, I do WISH this lift were a 4-post. An L-lift is what I have, and as I will show later in this post, they don’t fare as well as 4-post lifts. Lateral sway fore and aft breaks “L” lifts, and I’ll show a photo of another boat in the city that had this problem with Irma. Search as I might, I could find no example of a boat on a 4-post lift failing in this storm – at least here, where we probably experienced cat 2 to 3 level damage.
Ropes, line and rode
You’re going to need lots of rope. Find it in the garage, find it in the anchor well; you will never find it at the store unless you thought about this months ahead. Liberate the anchor lines of your primary anchor and all the spares; it turns out the chains are useful too. The boat needs to sit in a spider pattern in the middle of the canal, and this will require all of your lines; MORE is better.
Replaced the bow eye and rear tie down cleats
About a year ago, I noticed the bow eye on this 20-year-old boat was missing. When did it go away? Bottom line, it was “missing”, which means it failed and wasn’t as strong as one might think. When I replaced the bow eye, I also replaced the other two tie-downs in the stern. Inspection showed that the stainless steel had corroded through from the inside; all three were weak. The one on the front was missing, and one of the two from the stern broke during removal. That isn’t supposed to happen! Good news for this storm: I had recently replaced all three of the U-bolts, all three are again strong, and I used them as the primary attach points for lines from the dock.
Spread the load
While the towing U-bolts are strong, there is not enough area there to tie things to. One solution is to string ropes through the U-bolts and then bring them up to the docking cleats on the top of the boat. Instead of that, I built three 3-to-4-foot ropes out of very heavy 3/4-inch line to attach to the towing eyes: a galvanized shackle on one end attached to the boat, and on the other end a large braided polypropylene eye to attach lines to and through. This also has the advantage that everywhere something is connected to the boat, it is underwater during the storm, which should keep it cool and help the lines survive periods of high load. It has the disadvantage that if the shackle or U-bolt fails, the lines go free with no topside cleat to try to hold on.
On the front, the trailer eye is hard to get to, so I used a large hook with a spring lock and, here, a metal eye on the water end – connecting lines using shackles, as already exist on anchor lines (anchor removed). On some, I used anchor lines on shore with the chain in a loop around a piling – that worked very well.
The canal faces east. There are two lines to shore on both the front and the rear of the boat, and for bonus points, a pair of north/south (side) lines to keep the boat from getting too close to the shore; if all goes as planned, these side lines never take a load. Also, with Irma, the weather forecast said the strongest winds would be from the south, so I added an extra set of lines from the boat’s right rear U-bolt to a separate piling on the shore. Both lines would have to fail to send the boat wandering.
When you get done, the boat looks like this in the canal. Most of the front line attachments are not visible – they are all underwater.
A nice photo; observe that it also shows a different boat on a lift to the left and a jet ski on a floating dock on the far side. BOTH also survived the storm, though the floating dock was doing a backward wheelie at the highest part of the storm tide, with its nose held under the seawall.
Most of the lines were run from the boat to shore, around a piling, and then back to the boat. This made it possible to adjust the length of every line from the boat. I note that it also means that when you get done adjusting all the lines, you have to SWIM to shore! I have seen people make the mistake of trying to keep the boat off the dock but close enough to make the jump. No! Put the boat in the middle and swim in.
To do better, each line from shore to boat should be a distinct line, so that one failure would not allow the doubled line to unwind. It didn’t matter; everything held. Also, advice from many says that the lines need to be tied DOWN to the dock so they do not get pulled up over the piling. I used small ropes and bowline knots to keep the lines near the bottom of the piling, allowing the lines to slide while keeping them held down on the pilings. This worked out to be extra prep with no return, because the water never got high enough for it to matter.
As predicted on the news, the water did get high, though. Not direct-hit, Keys-style high, but higher than I have ever seen it before at this location: about 6 inches above the level in the photo below. The boat was not troubled and found the windy day to be similar to a pretty ordinary day in the ocean. There was lots of mess to clean up, but no damage.
My immediate neighbors didn’t have any issues. Boats on lifts, boats in the water, all fine. Further down the canal, there was damage. Below is a picture of a large sailboat that was tied off the dock, but not far enough to allow the lines to stretch. Both boat and dock suffered damage – serious erosion of a piling can be seen in this picture.
A few canals away was an example of a boat on an L-lift where the lift failed. It looks like lateral movement on the “L” lift caused half of the lift to fail, tossing the boat into the water during the storm. The boat survived, with damage. In this case, the boat from the L-lift was at the end of the canal, and tying up “to the street” where I stood taking this picture would have been pretty easy. The majority of the wind would have blown the boat away from shore, making a pretty good case for “put it in the water”. To note, though, the trees on shore were blown down, so it would have taken some work to find a good place to tie on.
Leaving boats on floating lifts was also a losing proposition. When the water rises higher than the floating lift can ascend, the boat takes a dip. Answer: put the small boat on a trailer or put the boat in the water. Observe that the floating lift rose and damaged the dock; then the water receded with the floating lift stuck on the dock, putting the back end of the boat into the water.
I have a Boston Whaler very similar to the one above but a bit smaller; that one looks like a 17, mine is a 15. It is kept on a trailer on the side of the house, tied to three concrete deadmen installed about 10 years ago, with chains that just stay there waiting for the rare storm. The anchors go down into the ground about 4 feet, with a few bags of concrete each. In addition to tying the trailer to the ground, we tied the boat to the trailer and filled the boat’s anchor well with water to make it heavy. The boat weathered the storm with no issues. The fence in front of it blew down; I tied the fence to the boat during the storm to keep it from getting loose.
No matter what happened to my little boats, it could be worse. Less than a mile from here is the Hillsboro Inlet and there are some beautiful homes in that stretch of real estate including this one, just a couple houses from the inlet.
This is/was a beautiful, monstrous yacht, which did not survive. I hear the back end came loose during the storm, banged against pilings, and she sank. That is a bad day. On the front, not visible in this picture, is anchor chain tied up into the yard around a very large silver palm tree; that held. The back end just couldn’t have a big enough anchor? Big sail, hard to win?
With the experience of Hurricane Irma, I observe a few things:
Boats in canals do better than boats on lifts in strong storms
Boats on trailers tied to something heavy can survive lots of weather
4 post lifts do better than L lifts
Floating boat docks are not a really good place to be
Ideally, I’d invest in a trailer and put the boat on the trailer for a storm. I would then need a place to store the trailer, and would also have to get out “early” to avoid the nest of boats strung across the canal. A trailer is the best answer – and a truck to tow it away from the storm. Barring that, for a category 1, the L-lift with bracing will be fine. For a category 3, my plan says put the boat in the water. For a category 4, like the Florida Keys just experienced 100 miles south of here, well, you’re screwed either way and I’m not sure anything would help.
Comment from: Harry Alverio (Visitor)
Hi. In Puerto Rico, a lot of boats survived in the canals during Hurricane Maria. They were tied the same way. Those left in the marina got hurt the most, hitting pilings and other boats. I truly believe that if you cannot take the boat out of the water, try to move it to channels or mangrove-protected areas. This is my 2 cents!
03/28/18 @ 12:26 am
Comment from: joe (Member)
Thank you for the comment Harry. With Maria, Puerto Rico went through some real mess and I wish you safety and happiness.
05/24/18 @ 01:56 pm
Comment from: Shawn (Visitor)
I am moving to Florida shortly. I will also be living on a canal way. This was a very helpful read. I have been worried sick about what to do with my boat. No one really talks about boats on canal ways during hurricanes.
Just curious: if everyone is tying up all up and down the canal, what happens when one boat isn’t secured properly and starts making its way up the canal, hitting other boats? Do you worry about that?
Sorry for commenting so much later than your post; I just finally came across this!
04/18/19 @ 03:52 pm
Comment from: joe (Member)
Hi Shawn, welcome to the neighborhood.

> Just curious, if everyone is tying up all up and down the canal, what happens when one boat isn’t secured
> properly and starts making its way up the canal and hitting other boats? do you worry about that?

Everyone worries about that, and we worry about it before the storm. If a boat is in the canal and for some odd reason is not making its way to the middle like all the other boats, look for the neighbors to knock on the owner’s door and offer encouragement and assistance. It is also common for people to take dinghies up and down the canal before the storm and inspect the rope and knots of everyone up-wind.