Wednesday, November 27, 2013

Why Google Needs to Update Their Two-Factor Authenticator/TOTP System


So those of us who are security-conscious (and hacker-conscious) use two-factor authentication for our online and work accounts whenever possible.

Basically, two-factor authentication is the concept that your User Name or ID is something you Are (which you have to prove), your Password is something you Know (the first factor of proving you are who you say you are), and the second factor is something you Have.  So, for example, a system that sends a unique one-time code to your cell phone by text message or e-mail when you try to log in with a user name and password is a two-factor system.

Another example is a system where the server and the user know a shared secret (other than the password).  This secret was established during user enrollment, and is used, along with a known shared counter or some kind of time-code, to produce a PIN that is only usable once and thus changes with each log-in attempt.

Google uses this second system.  Twitter and Yahoo use the first type of system.  Microsoft offers a two-factor system but I haven't played with it yet.


Google uses the open standard TOTP (RFC 6238), which is nice because everyone knows exactly what it is, how it works, and how secure it is; everything is transparent, like you would want in a crypto-based system.  They developed their own app, which is available for all major mobile devices (but no desktop app; author's plug: I wrote one myself in .NET, it's available HERE at CodePlex).  The app produces a 6-digit code every 30 seconds, per the default assumptions in the standard.
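The core of RFC 6238 is small enough to sketch in a few lines.  Below is a minimal Python illustration (not Google's actual app code) of TOTP with those exact defaults: HMAC-SHA1 over a big-endian time-step counter, dynamically truncated to 6 digits.  Run with 8 digits, it reproduces the SHA-1 test vectors published in the RFC:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC the 8-byte big-endian counter with the shared secret (RFC 4226)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": the low nibble of the last byte picks which
    # 4-byte window of the hash becomes the code, so the window moves every time
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time: float = None, period: int = 30, digits: int = 6) -> str:
    # TOTP is just HOTP with the counter derived from the clock (RFC 6238)
    if for_time is None:
        for_time = time.time()
    return hotp(key, int(for_time // period), digits)
```

With the RFC's 20-byte ASCII test key "12345678901234567890" and time 59, `totp(key, 59, digits=8)` yields the published vector 94287082.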

When you sign up for the two-factor system, they provide a barcode that is basically a URI string defining the properties of the account, which you can scan with your phone (or a simple Base32 string you can type into a device that has the app but no camera).  They also provide ten 8-digit “emergency” codes that are one-time use, in case you need to get into your account but have lost your device.
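That URI uses the otpauth:// scheme, which isn't part of the RFC at all; it's a convention Google documents on the Authenticator project's wiki ("Key Uri Format").  Building one is just Base32 plus URL-escaping; here's a Python sketch (the account name and secret are made up for illustration):

```python
import base64
from urllib.parse import quote

def otpauth_uri(account: str, secret: bytes, issuer: str = None) -> str:
    # The shared secret travels as unpadded Base32 text
    b32 = base64.b32encode(secret).decode("ascii").rstrip("=")
    uri = "otpauth://totp/%s?secret=%s" % (quote(account), b32)
    if issuer:
        uri += "&issuer=" + quote(issuer)
    return uri

print(otpauth_uri("alice@example.com", b"12345678901234567890"))
```

The app scans this URI out of the QR barcode; the secret= parameter is the only piece that actually matters to the code generator.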

What they did, though, when they set up their system, is that in the app and the registration process they ONLY use the defaults (each code has a 30-second “period”, every code is 6 digits long, and SHA-1 is the core hash, with an 80-bit secret key; this key is the only piece of actual information they provide, as it is the only thing that changes between users).  Because they only implement these defaults, the Google Authenticator app is basically ONLY usable for Google sign-in, unless you happen to have a system that uses the EXACT same defaults.  I’ve tried using different URI strings on the app, and the app ignores them all; it even truncates any secret key you provide that is longer than 80 bits.  Frustrating.

(Note: the app also supports HOTP, defined in RFC 4226, but time-based codes are more common in the "wild", and I'm not discussing the HOTP side of the app in this article.)


First, they can update their app to fully implement the standard URI format and allow for different “periods”, digit lengths, hash algorithms (anything in the SHA family, including the new SHA-3 standard when it finally gets codified by NIST, or other widely popular algorithms like WHIRLPOOL, RIPEMD-160, etc.), and any key length between 80 and 512 bits.  This would allow a larger number of users and system administrators to use the Google app as the standard for user-device acceptance.  The Bring Your Own Device (BYOD) movement is already strong in the marketplace, so why not take advantage of it?  RSA doesn’t have to be the only leader in this field.

Second, they could update their system to use stronger secret keys (longer than 80 bits) and a better algorithm (like SHA-384) as their own new defaults.  Since they already have to have a DB or a table somewhere with these 80-bit keys for their users, they could add the fields necessary to store the additional system information (or simply widen the fields for the longer keys), plus a flag for which algorithm the user is enrolled with, such as the v1 algorithm (SHA-1) or a v2 algorithm (SHA-2-something).  This wouldn’t increase their data usage by any unmanageable amount, and it would still allow their legacy two-factor users to log in until they updated to the new order.

In the meantime, Google has released their Authenticator code to the world on their code repository site, so any other programmers could take up that torch and do what they are doing, only better, and then Google could just publish their updated standard to allow the app writers to adjust.

Either way, as much as their two-factor system is a great start for security, in the wake of GPU-based password hash crackers and the bevy of NSA leaks, I think they need to be doing a lot more going forward.


First off, the old DES encryption standard uses a 56-bit key, which, with modern hardware, can be brute-forced in about 4 minutes (using specialized, but not terribly expensive, equipment).  Eighty bits is not much longer, and an attacker who manages to skim 2 or 3 codes from a user by “shoulder-surfing” his phone at an airport or coffee shop can then feed them to modern GPU-based hash crackers (which are freely available on the Internet; search “hashcat” for one example) and brute-force his way toward the 80-bit key.

The attacker can verify his efforts, even if he doesn’t know the target’s username and password, by checking each candidate key against the time-codes the target’s phone/device would have used when the TOTP codes were skimmed.  Remember: cell phone companies broadcast the time to within about 30 seconds of most Internet time servers, so that information is not difficult to get at all; we all know the algorithm Google uses to generate those time-codes; and the TOTP codes are produced every 30 seconds by default.

The TOTP algorithm does have a feature that complicates such a reverse-engineering attempt (the final 32 bits used internally to generate the 6-digit TOTP code are taken from a different part of the internal hash each time, based on a value in that hash, so the location changes every code), but at 80 bits, Google’s current secret just isn’t long enough to stave off modern (and especially distributed) brute-force efforts that simply test every candidate key end to end.
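To make the math concrete, here's a toy Python sketch of that verification loop.  Everything here is simulated: the "skimmed" codes are generated locally, and the key space is deliberately shrunk to 16 bits so it finishes in a blink.  The 80-bit version of this exact loop is what the GPU farms would grind through:

```python
import hashlib
import hmac
import struct

def totp6(key: bytes, unix_time: int) -> str:
    # Standard RFC 6238 code with Google's defaults (SHA-1, 30 s, 6 digits)
    mac = hmac.new(key, struct.pack(">Q", unix_time // 30), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 1000000).zfill(6)

def brute_force(observed, key_bits=16):
    # observed: (timestamp, code) pairs the attacker shoulder-surfed.
    # Keep every candidate key that reproduces ALL observed codes;
    # with enough observations, only the real key survives.
    matches = []
    for k in range(2 ** key_bits):
        key = k.to_bytes(key_bits // 8, "big")
        if all(totp6(key, t) == c for t, c in observed):
            matches.append(key)
    return matches
```

Each additional skimmed code cuts the surviving candidates by roughly a factor of a million, which is why two or three observations are plenty to confirm a key.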


Google, increase your secret-key lengths, and make your app more adoptable by, I dunno, adopting the full standard in the first place!

Thursday, September 5, 2013

PBKDF2 in Almost Any HMAC Flavor You Want In C# .NET 4

using System;
using System.Text;
using System.Security.Cryptography;

namespace System.Security.Cryptography
{
    //Generic PBKDF2 class that can use any HMAC algorithm derived from the
    // System.Security.Cryptography.HMAC abstract class
    // PER SPEC RFC 2898, with help from user Dodgyrabbit on StackExchange
    // the use of default values for parameters in the functions puts this at .NET 4;
    // if you remove those defaults and create the required constructors, you should be able to drop to .NET 2

    //constrain T to be any class that derives from HMAC, and that exposes a new() constructor
    public class PBKDF2<T> : DeriveBytes where T : HMAC, new()
    {
        //Internal variables and public properties
        private int _blockSize = -1;  // the byte width of the output of the HMAC algorithm
        byte[] _P = null;             // the password
        int _C = 0;                   // the iteration count
        private T _hmac;

        byte[] _S = null;             // the salt
        // if you called the initializer/constructor specifying a salt size,
        // you will need this property to GET the salt after it was created from the crypto rng,
        // otherwise the SALT WILL BE LOST!!
        public byte[] Salt { get { return (byte[])_S.Clone(); } }

        // Constructors
        public PBKDF2(string Password, byte[] Salt, int IterationCount = 1000)
        { Initialize(Password, Salt, IterationCount); }

        public PBKDF2(byte[] Password, byte[] Salt, int IterationCount = 1000)
        { Initialize(Password, Salt, IterationCount); }

        public PBKDF2(string Password, int SizeOfSaltInBytes, int IterationCount = 1000)
        { Initialize(Password, SizeOfSaltInBytes, IterationCount); }

        public PBKDF2(byte[] Password, int SizeOfSaltInBytes, int IterationCount = 1000)
        { Initialize(Password, SizeOfSaltInBytes, IterationCount); }

        //All constructors call the corresponding Initialize methods
        public void Initialize(string Password, byte[] Salt, int IterationCount = 1000)
        {
            if (string.IsNullOrWhiteSpace(Password))
                throw new ArgumentException("Password must contain meaningful characters and not be null.", "Password");
            if (IterationCount < 1)
                throw new ArgumentOutOfRangeException("IterationCount");
            Initialize(new UTF8Encoding(false).GetBytes(Password), Salt, IterationCount);
        }

        public void Initialize(byte[] Password, byte[] Salt, int IterationCount = 1000)
        {
            //all Constructors/Initializers eventually lead to this one, which does all the "important" work
            if (Password == null || Password.Length == 0)
                throw new ArgumentException("Password cannot be null or empty.", "Password");
            if (Salt == null)
                Salt = new byte[0];
            if (IterationCount < 1)
                throw new ArgumentOutOfRangeException("IterationCount");
            _P = (byte[])Password.Clone();
            _S = (byte[])Salt.Clone();
            _C = IterationCount;
            //determine _blockSize by running the HMAC once and measuring its output
            _hmac = new T();
            _hmac.Key = new byte[] { 0 };
            byte[] test = _hmac.ComputeHash(new byte[] { 0 });
            _blockSize = test.Length;
        }

        public void Initialize(string Password, int SizeOfSaltInBytes, int IterationCount = 1000)
        {
            if (string.IsNullOrWhiteSpace(Password))
                throw new ArgumentException("Password must contain meaningful characters and not be null.", "Password");
            if (IterationCount < 1)
                throw new ArgumentOutOfRangeException("IterationCount");
            Initialize(new UTF8Encoding(false).GetBytes(Password), SizeOfSaltInBytes, IterationCount);
        }

        public void Initialize(byte[] Password, int SizeOfSaltInBytes, int IterationCount = 1000)
        {
            if (Password == null || Password.Length == 0)
                throw new ArgumentException("Password cannot be null or empty.", "Password");
            if (SizeOfSaltInBytes < 0)
                throw new ArgumentOutOfRangeException("SizeOfSaltInBytes");
            if (IterationCount < 1)
                throw new ArgumentOutOfRangeException("IterationCount");
            // You didn't specify a salt, so I'm going to create one for you of the specified byte length
            byte[] data = new byte[SizeOfSaltInBytes];
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            rng.GetBytes(data);
            // and then finish initializing...
            // Remember: read the Salt property to get this generated salt BEFORE calling GetBytes(),
            // because GetBytes() resets the object!!!!!!!!!!!
            Initialize(Password, data, IterationCount);

            //*DOOT* clean up in aisle 5! *KEKERKCRACKLE*
        }

        // required by the DeriveBytes class/interface;
        // this is where you request your output bytes after Initialize.
        // The state of the class is Reset after use!
        public override byte[] GetBytes(int ByteCount)
        {
            if (_S == null || _P == null)
                throw new InvalidOperationException("Object not Initialized!");
            if (ByteCount < 1)// || ByteCount > uint.MaxValue * _blockSize)
                throw new ArgumentOutOfRangeException("ByteCount");

            int totalBlocks = (int)Math.Ceiling((decimal)ByteCount / _blockSize);
            int partialBlock = (int)(ByteCount % _blockSize);
            byte[] result = new byte[ByteCount];
            byte[] buffer = null;
            // I'm using TT here instead of T from the spec because I don't want to confuse it with
            // the generic type parameter T
            for (int TT = 1; TT <= totalBlocks; TT++)
            {
                // run the F function with the _C number of iterations for block number TT
                buffer = _F((uint)TT);
                //IF we're not at the last block requested,
                //OR the last block requested is whole (not partial),
                //  then take everything from the result of F for this block number TT,
                //ELSE only take the needed bytes from F
                if (TT != totalBlocks || partialBlock == 0)
                    Buffer.BlockCopy(buffer, 0, result, _blockSize * (TT - 1), _blockSize);
                else
                    Buffer.BlockCopy(buffer, 0, result, _blockSize * (TT - 1), partialBlock);
            }
            this.Reset();  // force cleanup after every use!  Cannot be reused!
            return result;
        }

        // required by the DeriveBytes class/interface
        public override void Reset()
        {
            _C = 0;
            if (_P != null)
                Array.Clear(_P, 0, _P.Length); // zero the password; the compiler might optimize this line out! :(
            _P = null;
            if (_S != null)
                Array.Clear(_S, 0, _S.Length); // zero the salt; the compiler might optimize this line out! :(
            _S = null;
            if (_hmac != null)
                _hmac.Clear();
            _blockSize = -1;
        }

        // the core function of the PBKDF, which does all the iterations;
        // per the spec, section 5.2, step 3
        private byte[] _F(uint I)
        {
            //NOTE: SPEC IS MISLEADING!!!
            byte[] bufferU = null;
            byte[] bufferOut = null;
            byte[] _int = PBKDF2<T>.IntToBytes(I);
            _hmac = new T();
            _hmac.Key = _P; // KEY BY THE PASSWORD!
            _hmac.TransformBlock(_S, 0, _S.Length, _S, 0);
            _hmac.TransformFinalBlock(_int, 0, _int.Length);
            bufferU = _hmac.Hash;
            bufferOut = (byte[])bufferU.Clone();
            for (int c = 1; c < _C; c++)
            {
                _hmac.Key = _P;  // KEY BY THE PASSWORD!
                bufferU = _hmac.ComputeHash(bufferU);
                _Xor(ref bufferOut, bufferU);
            }
            return bufferOut;
        }

        // XOR one array of bytes into another (which is passed by reference);
        // this is the equivalent of data ^= newData;
        private void _Xor(ref byte[] data, byte[] newData)
        {
            for (int i = data.GetLowerBound(0); i <= data.GetUpperBound(0); i++)
                data[i] ^= newData[i];
        }

        // convert an unsigned int into an array of bytes, BIG ENDIAN,
        // per the spec, section 5.2, step 3
        static internal byte[] IntToBytes(uint i)
        {
            byte[] bytes = BitConverter.GetBytes(i);
            if (!BitConverter.IsLittleEndian)
                return bytes;
            Array.Reverse(bytes);
            return bytes;
        }
    }
}
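If you port or tweak the class above, check it against published vectors before trusting it.  RFC 2898 itself doesn't include test vectors, but RFC 6070 supplies them for PBKDF2-HMAC-SHA1; here's a quick Python cross-check of those vectors using the standard library's own implementation:

```python
import hashlib

# RFC 6070 test vectors for PBKDF2-HMAC-SHA1
vectors = [
    (b"password", b"salt", 1, 20, "0c60c80f961f0e71f3a9b524af6012062fe037a6"),
    (b"password", b"salt", 2, 20, "ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957"),
]
for password, salt, iterations, dklen, expected in vectors:
    derived = hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen)
    assert derived.hex() == expected
print("all vectors match")
```

Any implementation of the class above (with T = HMACSHA1) should produce these same bytes for the same inputs.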

Tuesday, June 25, 2013

Goodbye, Sprint

So, after 15+ years of being with the same carrier, I'm finally tired of putting up with their shit.  Mostly their prices; the service hasn't been... notable (I'll put it that way).

But this really made up my mind:

Reference Number: [REDACTED]
DATE/TIME: 2013-06-25 11:xx:xx

Your Sprint chat transcript

Megan: Thank you for visiting Sprint. What questions can I answer for you today?

You: Hi Megan, my question is, does Sprint offer anything less that the Everything ___ plans for smart phones? I ask because I have a plan now that has a ton of minutes in it that I never use.

You: I'd rather pay for the data, and a much smaller amount of minutes and texting.

Megan: I am sorry, the Everything Data is the most economical plan we've for smartphones.

You: See, it's the word economical that gets me. Ting and Virgin have plans that more match my actual usage that would range between $55-70/mo versus the $167 I'm paying now.

You: Is there anything that Sprint can do for me to match what they offer?

Megan: I can understand your concern. However the Everything Data is the most economical plan we've with Sprint.

You: I see... For my record, I have two lines with different contract lengths left, what would be my liability for early termination?

You: Are you able to provide an "exact" dollar amount?

Megan: I’m happy to pull up your account and take a look at that for you.

Megan: What is the account's primary telephone number?


Megan: Thank you, one moment please while I pull up your account.

You: ok


Megan: Thank you for holding.

Megan: I appreciate your patience!

You: no problem

Megan: If you cancel your 
[REDACTED, WIFE] line you need to pay $180 as early termination fee.

Megan: If you cancel your 
[REDACTED, MINE] line you need to pay $100 as early termination fee.

You: ok, that line has longer to go, so that makes sense

You: ok, so as of today, that's $280. If I go with a new provider, at only say $60/mo, I'd have that fee paid off in approximately 3 months.

You: I'm assuming those fees go down the closer I get to the end of the contract? If so, by how much?


Megan: From the 15th day of service to the end of the sixth month, the Early Termination Fee will be $350. When you have 17 months remaining, the Early Termination Fee will decrease by $20 with each additional month of service. With 5 months remaining until the end of the term, the ETF is only $100 if you decide to discontinue service.

You: Ok. Thank you. You've helped me formulate a decision that I need to confer with my wife.

Megan: You're welcome.

Megan: Are there any additional questions I can help with today?

You: Not that I can think of at the moment. Please attach a copy of this conversation to my account. And have a wonderful week. Thank you for your time. :)

Megan: You can have a copy of this chat conversation sent to your Email address by clicking the envelope icon in the upper right hand corner.

You: I will do that, thank you.

Megan: Thank you for visiting

Megan: My name is Megan and it has been a pleasure assisting you. Have a great day.

You: Take care, Megan :)

Megan: Bye and take care.

She was helpful, courteous, and patient, and I have no complaints about this conversation, aside from their ridiculous prices, which are not HER fault.  But my mind is made up... Ting uses the same network (they buy it from Sprint, which is kind of ironic on some levels), so the service should be the same, but $100 cheaper per month.

Time to say "Ting, how you 'doin?"  I just hope they let me keep my Detroit number...

[UPDATE:  Easiest carrier change I've ever done!  And I got to keep BOTH numbers I had with Sprint even though one is from Detroit.  The number portability act makes that possible.  I had to have Ting start the process to pull the numbers from Sprint so I could keep them, which meant giving them my social security # and the account pin at Sprint.  This always makes me cautious by nature, but I know why they did it.  They had to have proof that I was indeed switching to a new carrier, and not some scammer trying to steal my service. 

TIP:  DON'T CANCEL YOUR PLANS/ACCOUNTS BEFORE YOU START WITH A NEW CARRIER, otherwise you'll lose your number.  If your account is closed or grossly overdue, they have no obligation to let you keep your number!

Meanwhile Sprint responded to my tweet about switching:
I told them their fees were "crap" and that they don't offer any reasonable plans.  Of course, I learned about Sprint being acquired by a Japanese telecom [warning: video] only AFTER I switched (this isn't new news, but it was news to me; I was clueless), and that they'd be offering new plans soon.  But the new plans are still too expensive, and I'm done with constantly having a contract hovering over my head.  Too little, too late.]

(use promo code at to help out the Kevin Pollak Chat Show if you want, it's worth $25 bucks to you, and Kevin has a great podcast if you haven't listened to/watched it yet, DO IT!  When that one stops working, use mine, you get $25, I get $25: Referral)

Sunday, June 2, 2013

CAML Queries for finding Document Sets

So you have Document Sets in your library, and you need to list only them, or you are trying to find one in particular.   But when your CAML queries are looking for specific document sets and field values, they turn up nothing.

Well there are two ways around this.  The first involves using a query for a folder object type.

  <Where>
    <Eq>
      <FieldRef Name='FSObjType' />
      <Value Type='Number'>1</Value>
    </Eq>
  </Where>

This looks for folders specifically, but it also seems to help with Document Sets.  This one is also well documented.

There is another way I found when I was using a tool that wouldn't let me pick FSObjType as a field because it really isn't a field so much as a Content DB... thing.  There is a hidden field called "Level" (internal name "_Level" which you must use in CAML) which appears to indicate the folder level within the library where an item resides.

So if you have a content type derived from Document Set called... ummm... "My Document Sets" for example, and they reside in a library at the root folder, you can find the one you want doing something like this:

     <Where>
       <And>
         <!-- this is where you would And-in some other criteria,
              like a name, title, description, whatever -->
         <Eq>
           <FieldRef Name='_Level' />
           <Value Type='Number'>1</Value>
         </Eq>
         <Eq>
           <FieldRef Name='ContentType' />
           <Value Type='Text'>My Document Sets</Value>
         </Eq>
       </And>
     </Where>

Specifying the Level field seems to force SharePoint to include folders at the level given.  This will get you around a tool or product that won't let you specify FSObjType; as long as that tool can find hidden fields, Level should be in there.

Good luck.

Friday, May 17, 2013

Why is my InfoPath form failing to publish as a Site Content Type in SharePoint 2007/2010?

First of all, this assumes you have a form in InfoPath (any version older than 2013; I haven't tested that one) and that you want to publish that form into a SharePoint site as a Content Type.  This is advantageous to the site, because that form can then be reused as much as you want in the Site Collection (although these work only so well in Content Type Hubs, the reason being that the template XSN file would cross the Site Collection barrier, which is generally a no-no).

BUT:  You need to be careful how you document your content type in the Description box in InfoPath.  Here is where the Office team messed up:  They don't do any encoding or sanitizing of the text in that box before sending it off to the Webs web service in SharePoint (Webs.asmx).  As a result, any XML tags, or HTML tags, or hell, ANY stray double quotes (") or greater-than/less-than symbols (<) (>), will make their way into the call, and gum up the whole works.

The reason this fails is the call to the Webs.asmx service is SOAP, which is pure XML, and the description is sent as an Attribute, not as a text node value.
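There's no supported hook to fix InfoPath itself, but if you're writing your own code against Webs.asmx (or any SOAP service), the defensive move is simple: escape the text before it ever lands in an XML attribute.  A Python illustration of what the client should be doing (the description string here is made up):

```python
from xml.sax.saxutils import escape

# Hypothetical description typed into InfoPath's Description box
description = 'Use this form for "new hire" requests <HR only>'

# Escape &, <, and > (the defaults), plus double quotes, because the
# description ends up inside a double-quoted XML attribute in the SOAP call
safe = escape(description, {'"': "&quot;"})
print(safe)
```

With the quotes and angle brackets converted to entities, the attribute value can no longer break the surrounding SOAP XML.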

Here's the real rub, though: if you DO happen to have illegal characters in there, InfoPath won't error out.  Instead it will successfully publish the form to the site, but then prompt for credentials over and over and over again for the web service.  If you're foolish enough to continue clicking Ok you'll eventually lock out your AD account in the process.  When you click Cancel, it will say "Creating the site content type failed."  That's it.  No help at all.  What it should say, instead of prompting at all is "SOAP error" or "400: Bad Request" which is what is actually happening in the background.

What it took to find this little sucker was installing Fiddler on the local machine and watching the traffic going out to the website.  I was seeing HTTP 400 errors when the web calls were being made, and in looking at the RAW SOAP XML request to the service, that's where I saw the Description attribute, and it was invalid, thanks to double quotes in my description.

I lost 4 hours to this error, and had to drag in a sys-admin to help me figure out Fiddler to see the raw data.  I hope this helps someone else...

Friday, May 10, 2013

Friends don't let friends use RC4 (UPDATED: EVER!)

And there are a myriad of reasons why.  Despite its great speed and small footprint, its simplicity also leaves it wide open to faults and attacks.  So it's no longer recommended for any new implementations, and existing implementations should stop using it.

UPDATED SEPT 17 2013:  Looks like the official recommendations are coming down now for that to happen after all these NSA revelations.  And it gets worse.

But here's the rub.  It was so widespread during the early days of many Internet and Wireless protocols (and is STILL used), you actually CAN'T get rid of it entirely, and you can't improve upon it without breaking compatibility with older systems.

This is probably the most frustrating aspect of ALL crypto-systems:  Something that is great one day, can be (probably) broken in a short time frame (5-10 years or less), but is SO great, that it gets put into EVERYTHING before all hell breaks loose, and the industry as a whole is slow to adopt new tactics and algorithms, leaving the rest of us with "Swiss cheese security."

Take DES as an example.  It was developed a long time ago, when computers were still HUGE and desktop computing hadn't really happened yet.  As home computers became more and more powerful in the late 80's, and as cryptanalysts developed new techniques for getting algorithms to give up their secrets (quite literally) through careful bit manipulation and extensive comparison of inputs and outputs, it became clear that a new algorithm was needed.

DES was so prevalent, though, that in the move to get rid of it, the best anybody could do at the time (since DES was an adopted NIST FIPS standard) was to simply run it 3 times in succession with a longer overall key (once forward, once backward, once again forward, each pass with a different, usually independent, piece of the overall key).  Thus "Triple DES" or "3DES" was born.  Hardly any new hardware required, and very little new software: just use what you have 3 times.

This was... ok... except the security doesn't stack the way you'd hope: "meet-in-the-middle" attacks, which work on the construction from both ends at once, mean that instead of 168 bits of security, you basically only get 112.  Bummer.

And NIST calls it, effectively, 80.  Double bummer.

And with modern hardware, a single DES key can be broken by brute-force in a matter of minutes or hours.  Triple bummer. (see what I did there?)

Then came AES, which was chosen after a long competition held by NIST with some of the best crypto minds in the world (it was adopted in 2002 officially after an exhaustive process starting in 1997).  It's only just now starting to show cracks (in 2011 or 2012, I can't find the original article, someone figured out that in the 256 bit version, the key expansion schedule had some peculiar behavior that COULD someday lead to an attack).

AES is now one of the most widely adopted algorithms in software*, because it's free, open, and actually required for many US Government processes. (*I'm making an educated guess here based on my own experience.)  It's even found its way into some Intel CPU architectures as dedicated AES hardware (the AES-NI instructions), right alongside all the other key parts of the CPU.  Try changing that out in all the PCs in the world if it were ever broken!  UPDATE: Another CPU with such hardware was announced recently!  I just don't have the link handy, but many have Tweeted about it.

But I'm getting away from the plot...

RC4 was one of those algorithms that emerged and showed great promise due to its great speed and small memory footprint (less than 280 bytes in memory).  It was so fast, in fact, that it was adopted when the first short-range, wireless connectivity standards were being implemented (remember 802.11a and WEP?).  It does, however, still require a license (as of the time of writing this article) from RSA, the original developer.

The trouble is, by the time all the faults were discovered, 802.11a and all of its children were already in VERY widespread use.  The idea of forcing manufacturers and software vendors to change hundreds of thousands of devices (many of which had no way to "flash" new programs) was completely out of the question.  New standards, which overcame SOME of the flaws, were later adopted (enter WPA, then WPA2 Personal/Enterprise), but the damage had already been done.

RC4 is dead!  Long live RC4!

Yup, it's still out there, in use, in production, and it shouldn't be... but I digress.

One thing I will say, though, is that for quick-and-dirty encryption (read: weak and probably easily cracked), its speed is largely unmatched.  So even I dipped my toes in the water with a way to try and improve it, but I seriously doubt my way is any better than the last attempts.  [UPDATE: Nope.  ANY use of RC4 at this point is crap!  Just don't do it!]

But, if you need to use something small and lightweight... *wince* I can't recommend anything else that fits the same memory/code footprint *wince*.  I would normally say use Blowfish, Twofish, or even Threefish, but... *shrug* for this discussion, we're going "cheap"...

UPDATE:  Yeah, I'm essentially striking this from the record.  Even this "improvement" is just polishing a turd.  JUST DON'T USE RC4!!!!

For starters, don't use the standard version in your final implementation (and not at all if you don't obtain a license from RSA; remember, this algorithm is still under their control).  Do this instead:


Start with the standard version, just to build up the base, but remember that its flaws are numerous, and mostly surround the starting state and the first 1000 bytes or so of output.  From the standard, deviate with the following:

1.  The input key is passed through a [RIPEMD-160] hash first, before the state is initialized.
    (any decent hash will do, just stay away from MD5 and SHA-1; you don't need to go to
     full SHA-512, or Skein, or BLAKE, or Keccak, or anything like that, this is quick-
     and-dirty after all, right? Use all the bits that are output from the hash)
2.  Initialization is increased from 1 pass through the state buffer to 3
    (basically, increase from 256 key-based swaps to 768).
3.  The first 4096 bytes of stream output (without salt) are dropped.
    (this hides the initial state sufficiently enough, and costs very little in
     processor time)
4.  A salt is added to all final output bytes.
    (a) The salt is defined as a [RIPEMD-160] HMAC buffer (keyed by the *original* input key)
        of the state as defined by steps 1-3 above.
        (again, any hash in HMAC mode will work here, but use the same one as above;
         less code that way)
    (b) A single persisted byte 'R' determines the index from the salt buffer to be used
        next, and is initialized to 0 after steps 3 and 4(a).
    (c) The salt byte at index 'R' is XOR'd with the next stream output byte, which is
        obtained from the RC4 algorithm as normal.  'R' is then incremented, modulus the
        length of the salt buffer.
    (d) Repeat step (c) as needed.
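For reference, here is the unmodified base those steps start from: a minimal Python sketch of standard RC4, for study only (and per the updates above, absolutely not for production use):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): one pass of 256 key-based swaps
    # (step 2 above would make this three passes instead)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data
    # (step 3 above would drop the first 4096 keystream bytes here)
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Encrypting the classic test string "Plaintext" under the key "Key" gives the well-known vector BBF316E8D940AF0AD3, and because the cipher is a plain XOR stream, running the ciphertext back through with the same key returns the plaintext.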


Tuesday, April 16, 2013

Excel Column Name as a 0-based Index Number

This function is handy if you want to use packages like NPOI or EPPlus or some other Excel manipulation package.

Basically, find the numeric value of each letter and multiply it by the appropriate power of 26.  Think of it like converting Base-26 numerals to decimal (Base-10); the process is basically the same.  We have to subtract 1 at the end because A is used like a 1 rather than a 0, and in Excel the columns start from 1, not 0.  So A=0, B=1, C=2, ... AA=26, AB=27, AC=28, ... ZA=676, ... AAA=702, etc.  There is a limit to the number of columns in Excel 2007/2010 (16,384, with the last column named XFD), so anything beyond that is moot.

/// <summary>
/// Returns the 0-based column number from an Excel column name
///   string, like "G" or "ZD" or "DAJ".
/// </summary>
/// <param name="column">The Excel column name (letters only).</param>
/// <returns>The 0-based column index.</returns>
private static int GetIntFromColumnName(string column)
{
    column = column.ToUpperInvariant();
    const string alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    int multiplier = (int)Math.Pow(26, column.Length - 1);
    int result = 0;
    char[] stuff = column.ToCharArray();
    for (int scan = 0; scan < column.Length; scan++)
    {
        int index = alpha.IndexOf(stuff[scan]);
        if (index < 0)
            throw new ArgumentOutOfRangeException("column");
        result += (index + 1) * multiplier;
        multiplier = multiplier / 26;
    }
    return result - 1;
}
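As a quick sanity check on the base-26 arithmetic, here's the same conversion in a few lines of Python, exercised against the examples above (a sketch for verification, not part of the original code):

```python
def column_to_index(column: str) -> int:
    # Treat the name as a base-26 numeral where A=1 .. Z=26,
    # then subtract 1 so column A maps to index 0.
    result = 0
    for ch in column.upper():
        if not "A" <= ch <= "Z":
            raise ValueError("column names use letters A-Z only")
        result = result * 26 + (ord(ch) - ord("A") + 1)
    return result - 1
```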

Thursday, April 4, 2013

555 Based Constant Current Charge Circuit (UPDATED)

Ok, so first things first, I didn't come up with this.

Secondly, I'll probably be posting a followup and/or gutting this post to update any major changes, and I've already found one while doing some troubleshooting.  Each update will be in red, and outright fallacies will be struck out or removed.

UPDATE May 1st, 2013:  Thirdly, the fab house where I bought my boards was itself bought by another company (see below).

I'm borrowing heavily from the work of Kenneth Finnegan and Mike Davis.   Mike came up with the idea of using the comparators and the latch inside of a 555 timer chip to do a charge/dump cycle for a battery charger powered from a solar panel.  And Kenneth extended it a bit so that it's now a constant current charger.

The idea is that when a lead acid battery falls to 11.5V, it's considered discharged (at least if it's being stored, as lead acid batteries don't like being discharged below like 9V unless you're talking about a marine battery or an AGM battery).  So the Trigger comparator (pin 2) looks for a low battery voltage set there (through a voltage divider), and then kicks the 555 on, which switches on an LM317 constant current circuit.  Then the Threshold comparator (pin 6) will kick the 555 off again when the battery rises to about 14.2V, which is the point where most lead acid batteries should stop charging, although your mileage will vary (it should be written on the battery).

This does not go into the constant voltage stage of a charge cycle after the 555 kicks off, and there is resistance between positive and ground of the battery through the voltage dividers, so it WILL eventually drain the battery, BUT it should kick back in to charge it back up.  So as long as there is power: No problem!

Go here for a great talk on lead acid batteries from Ken for more details on the whole process and the chemistry of these batteries.  And also this great primer from Bill Hammack the Engineer Guy.

So, here is the circuit (see Kenneth's blog and Mike's post for the originals).  (This is the prototype I had made, not the current circuit I'm working on, although if you can get this to work for you, please drop me a line with anything you learned along the way, as I seem to be getting different behavior from Ken)

I'm a bit of a n00b at Eagle, so there's extra pins for the pads where you connect the supply voltage and the battery connections instead of nice connectors or screw terminals.  *shrug*  It works, which is a win in my book, so... It is what it is.

The 7805 acts as a stable voltage reference against which the 555 checks the battery at VOUT.  The trim pots need to be adjusted so that, when the VOUT pin is at 11.5 volts (the trigger or ON point for the battery to start charging), the voltage being fed into Pin 2 is about 1.667 volts (test point 1).  Then adjust the other trim pot with VOUT at 14.2V so that the voltage at Pin 6 is at about 3.333 volts (test point 2).  This should calibrate the charger for most lead acid batteries.
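Those two target voltages aren't arbitrary: a 555's comparators trip at 1/3 and 2/3 of its supply, here the 7805's 5 V output, which is where the ~1.667 V and ~3.333 V test points come from.  A quick Python sketch of the divider math (the ratios are derived from the post's numbers; no component values are implied):

```python
# 555 comparator trip points are 1/3 and 2/3 of the 5 V reference.
VREF = 5.0
v_on, v_off = 11.5, 14.2              # battery ON/OFF voltages from the post

trigger_target = VREF / 3             # test point 1, into pin 2
threshold_target = 2 * VREF / 3       # test point 2, into pin 6

ratio_on = trigger_target / v_on      # divider fraction of VOUT for pin 2
ratio_off = threshold_target / v_off  # divider fraction of VOUT for pin 6
print(f"pin 2 target {trigger_target:.3f} V, divider ratio {ratio_on:.3f}")
print(f"pin 6 target {threshold_target:.3f} V, divider ratio {ratio_off:.3f}")
```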

According to Ken, when the 555 output goes low (OFF), the LM317 constant current circuit shuts down, and the 1N5818 diode prevents damage to the rest of the circuit.  Basically, with the Adjust pin pulled down through the transistor (which is there to prevent over-voltage to the 555), the voltage regulator output goes way down, to the point where the diode is reverse-biased.   When the 555 goes high, the transistor stops conducting and the Adjust pin floats relative to the battery, and the current sensing resistor (R1).  The current sensing resistor keeps the adjust pin just at the right point to bring the voltage high enough to conduct to the battery, but not so high as to draw too much current.  Ken indicated on his blog that he has a switch to kick over to a 3ohm resistor instead of a 1.5ohm to allow for slower charging (half of the current).  He doesn't have it listed in his circuit, and I opted to not use it either.
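The constant-current part follows the standard LM317 arrangement: the regulator holds its ~1.25 V reference (the typical datasheet value) across the sense resistor, so the charge current is just 1.25 / R1.  That's why Ken's 3 ohm option gives half the current of the 1.5 ohm one.  A quick sketch of the arithmetic:

```python
V_REF = 1.25                     # LM317 ADJ-to-OUT reference voltage (typ.)

def charge_current(r1_ohms: float) -> float:
    # Constant-current LM317 configuration: I = V_REF / R1.
    return V_REF / r1_ohms

for r1 in (1.5, 3.0):            # the two sense-resistor options mentioned
    print(f"R1 = {r1} ohm -> {charge_current(r1):.2f} A")
```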

UPDATE:  I've since come to realize that the 555 might be asked to do too much work with driving the LEDs and sinking the adjust pin of the LM317.  I've replaced the NPN transistor with a PNP transistor that pulls the LM317 directly to ground, rather than through the 555.  The 555 now drives the base of the transistor through the 4.7k resistor that was there before, rather than providing the ground path through itself.  I noticed that if the charging LED was pulling too much current from pin 3, it was enough to start "dragging down" the LM317 through a partially conductive transistor, and its output would fall well below the battery voltage, preventing it from actually charging.

Here is my prototype, which I ordered from BatchPCB.  I had to send them the Gerber files from my Eagle board file and wait about 2 weeks.  (Note: the board currently uploaded on BatchPCB is version 1.1, not the completed version 1.0 prototype pictured below.)  I'm already up to 1.2, which has the transistor change mentioned above and isn't on BatchPCB yet.

UPDATE May1st, 2013:  Also, see this note.  BatchPCB has been bought out by OSH Park. Looks like I'll either need to follow their guide on migrating accounts, or just wait, and submit a board after I've done all my major edits and other proofs.  This thing is difficult to troubleshoot, and my bench equipment sucks...

BatchPCB used to be run by the people at SparkFun Electronics.  They had a great service, but you should ONLY use it for prototyping.  There's a $10 setup fee, and the individual board prices are... a bit high.  But it was MEANT to be a prototyping house so that you don't have to buy 20 boards only to find out there are huge mistakes (like the ones I've listed, or worse).  They scooped up a number of orders and "panelized" them so they all get made at once, hence the wait.  (The new service seems to offer somewhat quicker turnaround while still using the same "bundle peoples' orders to save costs" scheme, and they also sell boards in "sets", so you get - at the time of writing - 3 boards in a single order, which is actually cool.)  I got two in case I had to do green-wire surgery (turns out I do).  Once you've proven out your board, go to a full-blown fab house to bring the cost way down.

Honestly, though, I'm really happy with the results. (OSH Park uses gold on purple, and they have tighter tolerances, so... we shall see... I might make this an SMD board after all. Then again I might just approach SparkFun and have them make/sell them)

This isn't the end of this board; I'm still making tweaks (as indicated above), such as adding a filter cap on the output to stomp noise before it reaches the 555.  You can already see some changes to the silk screen (yellow) in the photos.  I had to switch the LEDs, as they were labeled backwards, so you'll see lines drawn on the photos.  I also had to move the mounting holes.  Turns out there's this little thing called "clearance" you have to watch out for [/sarcasm].  But that's what a prototype is meant to weed out.

It needs an enclosure, which might also help with the noise-susceptibility problem, and ideally a fuse or two. And, I need like, 8 more of them... and the same number of power supplies (*yikes*).  I have almost a dozen of those SLA's from old UPS equipment I've repaired for friends.  I use them for portable power and to bilge out my sumps if I lose power (thank you marine supply store!).  The hard part is keeping them charged for long stretches, and these boards are just what I needed... assuming I get more...