A Personal Rant on Trading Bitcoin

My trading practices thus far can be summed up in one sentence: “I’m an idiot.”

It’s up. It’s down. Oh, my gosh, I could have made $10k. Oh, no, I’ve lost $300. Ack! What does everyone else think?… Just an average day in a Cryptocurrency trader’s life. But not for the above-average trader. Much of the same advice you’d get from a financial planner applies to Bitcoin and other cryptocurrencies.

Research the Company

Who are the developers of the Currency you’re thinking about buying? Who are their investors? Have they successfully launched a currency before? Have previous projects they worked on failed in a spectacular fashion?

There are a lot of motivations for attempting to create a new Altcoin. Notoriety, solving social or economic problems, and greed are some of the most popular themes in Cryptocurrency. Following the successes or failures of a development team will help you figure out what motivates them. Don’t get stuck thinking greed is a bad motive. Several self-interested projects made a lot of money for the development team and the early investors who knew when to sell.

Research the Product

The development teams are going to market their Cryptocurrency to garner investment interest, adoption, and higher trading prices. That makes it easy to find information like what problems they’re trying to solve or what new Blockchain technology they’re trying to introduce. Are they trying to bring a solution to Apple products or mobile devices where others aren’t?

Invest for the Long Term

If your full-time job is staring at charts and day trading, you can still do that with cryptocurrencies; you just need to adjust to the increased volatility. By volatility I mean 40% up or down in a day… 30 minutes, even. But if you’re trading on the intraday bumps, you might find a higher portion of your profits going to fees and splits. So, I say invest for the long term. If I had followed the advice in these sections, I’d have a lot more disposable income.

Personal Stories

I met a guy while working at Dell who told me the story of the $300,000 240 MB hard drive he bought. Yes, MB. He cashed in some of his employee stock purchase plan shares for a new hard drive back when the stock wasn’t worth all that much. By the time he told me the story, the hard drive was worthless and the stock he sold for it would have been worth $300,000. Oh, how we laughed. And now I’ll relate the story of the guy who bought a pizza with Bitcoin when it was worth pennies; that Bitcoin would now be worth millions. You’d think I’d learn from others, but I too have purchased a $700 tablet for what is now $4,000 worth of Bitcoin at today’s prices.

But I think more disappointing are opportunities I missed due to fear.

Stratis is an Altcoin someone pointed out to me in December of 2016, when the price was under $0.05. I thought, well, let’s wait and see what happens. The interesting thing about Stratis is the development team’s partnership with Microsoft and the decision to build their platform on the .NET Framework. This means the products a developer would write to interact with their Blockchain technology can run natively on Windows operating systems without a lot of additional translation or “wrapper” code. The price went up to something over $0.07 and I said, “OK, I’ll buy some,” and invested $300. I woke up one morning a few weeks later and the price was over $0.30; it has been hovering between $0.40 and $0.50 for the last two weeks. The currency had a lot of earmarks of a good investment, and I kick myself for not putting in $1,500 or more at the $0.07 price.

DASH, which launched as Dark Coin, is a currency I used to mine. The name Dark Coin certainly sounded cool to the kids, and it was marketed as the first truly anonymous currency because the network had a function called mixing, where your coins could be split up and mixed with fractions of other Dark Coin on the network without additional entries in the blockchain, removing the traceability of the transactions. When fintech investment in Blockchain technologies started becoming serious business, they grew up and changed the name to DASH. I had mined 8 DARK when I had a hard drive failure and said, well, I won’t bother with that currency anymore. At the time, DASH was only worth around $1.00, so I was out maybe $10. Around the same time Stratis had its big jump, DASH went to $100 and has stayed above $50. Now why didn’t I keep mining when the difficulty was low and amass a vast fortune? I was able to restore my Dark Wallet from a backup and retrieve my 8+ DASH, but I could have had 100 over the course of that year.

Check out the stellar rise of PIVX. I looked at it when it was less than $0.03. It’s trading at $1.38 today… $1,500 would be worth over $100,000, and it happened extremely fast.

Stay tuned!

Wikipedia – Bitcoin

https://en.wikipedia.org/wiki/Bitcoin

Bitcoin Forum – The most popular place to discuss all Cryptocurrencies

https://bitcointalk.org/index.php

Cryptocurrency Trading Charts

https://coinmarketcap.com/

Most Profitable Mining Calculations

http://www.coinwarz.com/cryptocurrency

Some Exchanges

https://poloniex.com/

https://btc-e.com/

https://www.gdax.com/

https://www.bittrex.com/


Mining Cryptocurrencies

Cryptocurrencies like Bitcoin, well, almost exclusively Bitcoin, have been getting some play time in the popular media. Hopefully you understand what they’re talking about; if not, check back to my earlier post “A Change in Direction”. Now you might be asking yourself, “How do I get some?” One option is to “mine” your Cryptocurrency.

Mining Cryptocurrencies is essentially using electricity to generate Cryptocurrency. Not all Cryptocurrencies can be mined. For those that can, the process involves looking for a new block in the currency’s blockchain. The miner who finds the block is rewarded with some units of the currency.

The process of looking for a block involves math, complicated and difficult math. The sort of math that would take you or me 30-45 minutes to do by hand with a calculator. Additionally, the more miners doing the same math for a currency, the higher the “Difficulty” of the math becomes. New or less popular currencies are easier to mine until they become more popular. The more miners or the more powerful the mining hardware working away at the currency, the more the completion rate of the calculations must be slowed down to keep the reward rate stable. The “Difficulty” of the calculations is proportional to the “Hash Rate” of the currency’s network and adjusts to speed up or slow down the reward frequency.
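
To make that concrete, here’s a simplified, Bitcoin-style example of how the adjustment works (other currencies use different retarget schedules, but the principle is the same):

new difficulty = old difficulty × (expected time for the last batch of blocks ÷ actual time for the last batch of blocks)

Bitcoin retargets every 2,016 blocks with a 10-minute-per-block goal, so the expected time is 20,160 minutes. If those 2,016 blocks actually arrived every 8 minutes (16,128 minutes total), the difficulty goes up by 20,160 ÷ 16,128 = 1.25, a 25% increase, slowing block discovery back toward the target rate.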

Mining can be done on a computer’s CPU(s), through the video card’s GPU(s), or with application-specific integrated circuit (ASIC) hardware.

CPU Mining

CPU (Central Processing Unit) mining is the easiest to write software for and is therefore usually the first option available when a new currency launches. It’s also extremely poor performing. By poor performing I mean the Hash Rate (the number of calculations performed per second) is usually not high enough to pay for the electricity used to run the CPUs. What? Why is that? Remember, “The process of looking for a block involves math, complicated and difficult math.” A computer’s CPU isn’t built just for doing math calculations. A CPU is also responsible for sending commands to all the hardware in a system (network card, sound card, USB devices, etc.).

GPU Mining

GPU (Graphics Processing Unit) mining is a little harder to write software for because there are so many variations of video cards. New CPUs are released every few years and the command structure doesn’t really change, just the speed at which those commands are executed; additionally, there are only two CPU manufacturers (AMD and Intel). New video card hardware is released every year, sometimes twice a year, and while there are two main video card platforms (AMD and NVIDIA), the platforms are manufactured by 20+ different providers. GPU mining software must be compatible with several different versions of each platform’s hardware and updated whenever a new version is released. The payoff is a higher Hash Rate. Why are GPUs better for this? A GPU is engineered to do a lot more complex math. All those complicated, lifelike video games you love require a lot of math to display and manipulate the visual environment.

ASIC Mining

Application-specific integrated circuits (ASICs) are platforms where the processing unit is designed to perform only one action. In the specific case of Cryptocurrency mining, an ASIC miner is a chip that has only the instructions required to perform the complicated calculations for a designated set of currencies. These devices are the most efficient for mining Cryptocurrencies because they can’t do anything else, like send commands to your hard drive or tell the background of a first-person shooter to move as you hold down the ASDW keys.

I will post a more in-depth discussion about my personal experiences, and the experiences reported by others, in trying to use ASIC mining hardware. I will just provide this cautionary teaser: AMD, Intel, and NVIDIA are old, well-established, insured companies with large customer bases. Cryptocurrency ASIC manufacturers are largely new companies without any reputation or large R&D budgets, and in many cases, no support whatsoever.

Up next, my fun experiences with ASIC miners and then a detailed guide into mining software.

Stay tuned!

Wikipedia – Bitcoin

https://en.wikipedia.org/wiki/Bitcoin

Bitcoin Forum – The most popular place to discuss all Cryptocurrencies

https://bitcointalk.org/index.php

Cryptocurrency Trading Charts

https://coinmarketcap.com/

Most Profitable Mining Calculations

http://www.coinwarz.com/cryptocurrency

Some Exchanges

https://poloniex.com/

https://btc-e.com/

https://www.gdax.com/

https://www.bittrex.com/


Congratulations American Airlines

I did not think it possible. American Airlines has beaten the odds and actually made the center seat more hellish! Some of you may not remember, but roughly 10 years ago there were several lawsuits, some not valid, brought against airlines for cramped travel conditions. Doctors even had a name for the injury done to frequent fliers who had to sit in the wrong position for so long. American Airlines almost immediately yanked out 2-3 rows of seats to spread the rest out, giving everyone more leg room.

On Monday I flew 4 hours in the center seat of a 1-month-old aircraft run by American, and I can safely say the leg room has been reduced back to pre-“we want your business” levels. Not only was the distance between the seats ridiculously small, but the space under the seat in front was reduced by a third to accommodate some idiotic metal box bolted to the seat support, which was also offset 3+ inches from where it should have been.

There was no lumbar support in the seat back. I spent the entire journey writhing for relief from lower back pain, upper back pain, and calf spasms. They only make the aisle seats available to customers with status. Well, there’s a great way to discourage a person from flying with you enough to earn status. They’ll be wheelchair-bound by 25k miles.

PostgreSQL, AWS, and Musical Bottlenecks

I have had the misfortune of working with PostgreSQL for the last 8 months. Working is a relative term; for me, little work has been done. Mostly I’ve been kicking off queries, waiting forever for the results, and then trying to run down the bottleneck.

I am not a Linux professional and have to rely on those professionals to diagnose what’s going on with the AWS instance that runs PostgreSQL 9.3. Everyone who looked at the situation had a different opinion. One person looked at one set of performance data and said the system isn’t being utilized at all, someone else said it’s I/O bound, still someone else said it’s the network card… So we went through all these suppositions: added more RAM, then more processors, then leaned on the SSD drives more, and finally switched from non-provisioned IOPS to provisioned IOPS. That got the system roughly as far as we could push it, to the point where the complex queries would drive one CPU core to 100%.

Now those of you who work with real enterprise RDBMSs might say, “Wait… one CPU core reached 100%?” Well yes, of course, because you see, PostgreSQL does not have parallel query processing. Yeah…

No matter how many CTEs or subqueries are present in a query statement sent to PostgreSQL, the processing of said query happens in a synchronous, single-threaded fashion on one CPU core. I’m thinking SQL Server had parallel processing in the late ’90s or early 2000s? It’s 2014, for crying out loud.
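
You can see this for yourself on a 9.3 instance. Here’s a rough sketch (the table and column names are hypothetical): however complex the plan gets, a single backend process does all the work, and watching top while it runs shows one postgres process pinned near 100% of one core.

-- Hypothetical reporting query on PostgreSQL 9.3; the plan output shows a single
-- backend doing everything, no parallel workers.
EXPLAIN (ANALYZE, BUFFERS)
WITH recent AS (
    SELECT account_id, SUM(amount) AS total
    FROM   transactions
    WHERE  created_at >= now() - interval '30 days'
    GROUP  BY account_id
)
SELECT a.name, r.total
FROM   recent r
JOIN   accounts a ON a.id = r.account_id
ORDER  BY r.total DESC;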

And it gets better! According to my observations, the postgres process responsible for writing to the transaction logs is also single-threaded. So there isn’t any benefit to creating multiple log files for software striping and efficient log writing. In fact, one big insert seemed to back up all the smaller transactions while the first insert was writing to the transaction log.

This is one of the joys of Open Source offerings. If the development community doesn’t think a feature is important, you have to fork the code and write the feature yourself. What blows me away is that companies are willing to gamble the success of their products and implementations on something so hokey.

I’m not a DBA, But I Play One on TV: Part 3 – Database Files

When a customer invites me to review their SQL Server or Oracle databases and server architecture, I start with the servers. I review the hard disk layout and a few server settings. The very next thing I do is review the data files and log files for the databases. In the case of SQL Server, when I see one data file and one log file in the same directory and the database has one file group called Primary, I know I am once again presiding over amateur hour at the local chapter of the Jr. Database Developer Wannabe Club.

 

One file pointing to one file group indicates to me:

  1. Someone went through the “create new database” wizard.
  2. There wasn’t any pre-development design analysis done before the database was created.
  3. No one bothered to check readily available best practices for SQL Server.
  4. I can anticipate equally uninformed approaches to table and index design and query authoring.

 

This will antagonize the hardware striping advocacy group, but there are reasons to split up your data files and log files. Specifically, in the case of TempDB, you can greatly improve performance by creating the same number of data files as you have processors. With this configuration, allocations are spread across the files, which cuts down on contention.
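
Here’s a minimal sketch of what that looks like; the logical names, paths, and sizes below are examples, not prescriptions, so adjust them to your own drives and core count.

-- Add extra TempDB data files (one per processor is the common guidance).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);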

 

Check out number 8 here: http://technet.microsoft.com/en-US/library/cc966534

 

In addition to performance, recovery processes greatly benefit from splitting up the database files. Previously, if a data file failed, whether everything was in one file or not, SQL Server would take the whole database offline. With SQL Server 2012, a new feature was added that leaves your database accessible; only the data located in the corrupt or otherwise unavailable file goes offline. If all the data is in that one file, your dataset is down until you can recover it. But if that data file contains only a subset of the data in a table, the rest of the data in that table is still available for querying.
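
If you’ve never created a database this way, here’s a minimal sketch with made-up names, paths, and sizes: the data is split across the PRIMARY filegroup and two user filegroups that can live on different drives.

-- Illustrative only: a database whose data files are spread across multiple filegroups.
CREATE DATABASE SalesApp
ON PRIMARY
    (NAME = SalesApp_sys, FILENAME = 'D:\Data\SalesApp_sys.mdf', SIZE = 256MB),
FILEGROUP FG_Current
    (NAME = SalesApp_cur, FILENAME = 'E:\Data\SalesApp_cur.ndf', SIZE = 10GB),
FILEGROUP FG_Archive
    (NAME = SalesApp_arc, FILENAME = 'F:\Data\SalesApp_arc.ndf', SIZE = 50GB)
LOG ON
    (NAME = SalesApp_log, FILENAME = 'L:\Logs\SalesApp_log.ldf', SIZE = 4GB);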

 

Now, you might say, OK, we’re going to have a separate file for every table and multiple files for some. I’ve seen that configuration and there isn’t anything wrong with it. If your IT department isn’t using SQL Server to manage their backups, and instead they’re backing up the actual files across all the drives, they’re going to be annoyed with you. However, this configuration gives you maximum flexibility. For instance, tables that are commonly used at the same time can be placed on different spindles so they don’t compete for disk I/O.
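
Placing a table on a particular filegroup is just an ON clause at creation time. Continuing the hypothetical SalesApp sketch above:

-- Put a heavily used table on its own filegroup (and therefore its own spindles).
CREATE TABLE dbo.OrderDetail
(
    OrderDetailID bigint IDENTITY(1,1) PRIMARY KEY,
    OrderID       bigint NOT NULL,
    ProductID     int    NOT NULL,
    Quantity      int    NOT NULL
) ON FG_Current;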

 

Splitting up your log files is also beneficial. Log files are populated in a round-robin fashion: when one reaches the size you’ve set, SQL Server starts filling the next. Hopefully you have at least four, and they are of sufficient size. This gives you time to archive the transaction logs between backups, making sure no transactions are lost because a file rolled over before the backup removed completed transactions and shrank the file.

 

The next episode will cover backup basics. The purpose of all these posts is to provide the understanding needed to apply the best configuration to the database system you’re building.

 

I’m not a DBA, But I Play One on TV: Part 2 – CPU and RAM

In Part 1 I discussed SQL Server and Hard Disk configurations. Now let’s have a look at CPU and RAM. This topic is actually kind of easy. More is better… most of the time.

CPU

It’s my opinion that most development environments should have a minimum of 4 processors at 2.5+ GHz. Whether that’s one socket with four cores or two sockets with two cores doesn’t really make that much of a difference. For a low-utilization production system you’ll need 8 processors at 2.5+ GHz. Look, you can get this level of chip in a mid-to-high-grade laptop. Now if you’re looking at a very high-utilization system, it’s time to think about 16 processors, or 32 split over 2 or more sockets. Once you get to the land of 32 processors, advanced SQL Server configuration knowledge is required. In particular, you will need to know how to tweak the MAXDOP (Maximum Degree of Parallelism) setting.

Here’s a great read for setting a query hint: http://blog.sqlauthority.com/2010/03/15/sql-server-maxdop-settings-to-limit-query-to-run-on-specific-cpu/

And here are instructions for a system wide setting: http://technet.microsoft.com/en-us/library/ms189094(v=sql.105).aspx

What does this setting do? It controls the number of parallel processes SQL Server will use when servicing your queries. So why don’t we want SQL Server to maximize the number of parallel processes all the time? There is another engine involved that is responsible for determining which operations can and cannot be done in parallel and the order of the parallel batches. In a very highly utilized SQL Server environment this engine can get bogged down. Think of it like air traffic control at a large airport… but there’s only one controller in the tower and it’s Thanksgiving, the biggest air travel holiday in the US. That one air traffic controller has to assign the runway for every plane coming in and going out. Obviously, he or she becomes the bottleneck for the whole airport. If that individual only had one or two runways to work with, they wouldn’t be the bottleneck; the airport architecture would be. I have seen 32-processor systems grind to a halt with MAXDOP set at 0 because the parallelism rule processing system was overwhelmed.

For more information on the parallel processing process: http://technet.microsoft.com/en-us/library/ms178065(v=sql.105).aspx
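
For reference, here’s roughly what both approaches look like; the values are examples only, not recommendations for your hardware.

-- Server-wide setting (an advanced option, so it has to be exposed first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- Or limit a single query with a hint (hypothetical table):
SELECT CustomerID, COUNT(*) AS OrderCount
FROM   dbo.Orders
GROUP  BY CustomerID
OPTION (MAXDOP 4);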

RAM

RAM is always a “more is better” situation. Keep in mind that if you don’t set the size and location of the page file manually, the O/S is going to try to take 1.5 times the RAM from the O/S hard drive. The more RAM on the system, the less often the O/S will have to use the much slower page file. For a development system 8GB will probably be fine, but nowadays you can get a mid-to-high-level laptop with 16GB, and even 32GB is getting pretty cheap. For production 16GB is the minimum, but I’d really urge you to get 24GB. And like I said, 32GB configurations are becoming very affordable.
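
One related knob worth knowing about (this is a general suggestion rather than something covered above, and the number is only an example): capping SQL Server’s memory so the O/S and everything else on the box keep enough RAM to avoid leaning on the page file.

-- Cap SQL Server at 24 GB on a 32 GB box, leaving headroom for the O/S (example value).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;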

I’m not a DBA, But I Play One on TV: Part 1 – Hard Drives

This is the first in a series of posts relating to hardware considerations for a SQL Server 2008 R2 or later server. In Part 1 – Hard Drives I’m going to discuss RAID levels and what works for the Operating System (O/S) versus what works for various SQL Server components.

As a consultant I always go through the same hardware spec dance. It sounds like this:

Q: How much disk space does your application database require?

A: Depends on your utilization.

Q: Ok, what’s the smallest server we can give you for a proof of concept or 30 day trial?

A: Depends on your utilization.

Q: Well we have this VM with a 40 GB disk, 8 GB RAM, and a dual Core virtual processor available. Will that work?

A: Depends on your utilization, but I seriously doubt it.

SQL Server 2008 R2, depending on the flavor, will run on just about any Windows Server O/S from 2003 onward, plus Windows 7 and Windows 8. This isn’t really a discussion about the O/S, though; it’s more about how the O/S services SQL Server hardware requests. At the hardware level, the O/S has two main functions: managing memory and the hard disks, and servicing applications’ requests to those resources.

In a later post we’ll look at memory in a little more depth, but for the hard disk discussion we’ll need to understand the page file. The page file has been part of Microsoft’s O/S products since NT, maybe Windows for Workgroups, but I don’t want to go look it up. The page file is an extension of the physical memory that resides on one or more of the system’s hard disks. The O/S decides when to access this portion of the memory available to services and applications (processes) requesting memory resources. Many times, when a process requires more memory than is currently available, the O/S will use the page file to virtually increase the size of the memory on the system in a manner transparent to the requesting process.

Let’s sum that up. The page file is a portion of disk space used by the O/S to expand the amount of memory available to processes running on the system. The implication here is that the O/S will be performing some tasks meant for lightning fast chip RAM, on the much slower hard disk virtual memory because there is insufficient chip RAM for the task. By default the O/S wants to set aside 1.5 times the physical chip RAM in virtual memory disk space. For 16GB of RAM that’s a 24GB page file. On a 40GB drive that doesn’t leave much room for anything else. The more physical chip RAM on the server the bigger the O/S will want to make the page file, but the O/S will actually access it less often.

Now let’s talk RAID settings! You may find voluminous literature arguing the case for software RAID versus hardware RAID. I’ll leave that to the true server scientists. I’m just going to give a quick list of which RAID configurations the O/S and SQL Server components will perform well with and which will cause issues. I’m going for understanding here. There are plenty of great configuration lists you can reference, but if you don’t understand how this stuff works, you’re relying on memorization or constantly referencing the lists.

Summarization from: http://en.wikipedia.org/wiki/RAID

But this has better pictures: http://technet.microsoft.com/en-us/library/ms190764(v=SQL.105).aspx

RAID 0 – Makes multiple disks act like one; disk size is the sum of all the identical disk sizes and there isn’t any failover or redundancy. One disk dies and all info on all drives is lost.

RAID 1 – Makes all the disks act like one; disk size is that of one of the identical disks in the array. Full failover and redundancy.

RAID 2 – Theoretical, not used. Ha!

RAID 3 – Not very popular. Data is striped byte by byte across the disks, with one disk dedicated to parity, so the array can survive a single drive failure.

RAID 4 – Like RAID 3, but stripes data in blocks instead of bytes. One drive is dedicated to parity while the others hold the data; all the disks are accessed through one drive letter.

RAID 5 – Requires at least 3 identical drives. Data and parity are striped across all of the drives, so you get the capacity of all but one disk and the array keeps running if any single drive fails.

RAID 6 – Like RAID 5 except you need at least 4 identical disks, and two disks’ worth of capacity goes to parity, so the array can survive two simultaneous drive failures.

RAID 10 or 1+0 – A tiered approach where two or more RAID 1 mirrors are striped together as a RAID 0 array. So two fully redundant 500 GB three-way mirrors (three 500 GB disks each) striped together form one 1 TB volume. Sounds expensive: 3 TB in physical disks to get a 1 TB accessible drive.

At this point I’ll paraphrase the information found here: http://technet.microsoft.com/en-US/library/cc966534

SQL Server logs are written sequentially, one byte after the other; there aren’t any random or asynchronous read requests performed against these files by SQL Server. RAID 1 or 1+0 is recommended for this component for two reasons: 1. having a fully redundant copy of the log files for disaster recovery, and 2. RAID 1 mirrored drives support the sequential write I/O (I/O is short for disk read and write, Input and Output; I’m not going to write that 50 times) of the log file process better than a RAID configuration that splits one file over multiple disks.

TempDB is the workhorse of SQL Server. When a query is sent to the database engine, all the work of collecting, linking, grouping, aggregating, and ordering happens in TempDB before the results are sent to the requestor. This makes TempDB a heavy write-I/O process, so the popular recommendation is RAID 1+0. Here’s the consideration: TempDB is temporary, which is where it gets its name, so redundancy isn’t required for disaster recovery. However, if the disk your TempDB files are on fails, no queries can be processed until the disk is replaced and TempDB is rebuilt. RAID 1+0 helps fast writes and ensures uptime. RAID 5 provides the same functionality with fewer disks, but decreased performance when a disk fails.

TempDB and the logs should NEVER EVER reside on the same RAID arrays, so if we’re talking about a minimum of two RAID 1+0 arrays, it might be more cost-effective to put TempDB on RAID 5.
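
If TempDB ended up on the wrong drive at install time, relocating it is straightforward. Here’s a sketch using the default logical file names, with example paths; the new locations take effect the next time the SQL Server service restarts.

-- Point TempDB's data and log files at dedicated arrays (paths are examples).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'U:\TempLogs\templog.ldf');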

Application OLTP (Online Transaction Processing) databases will benefit the most from RAID 5, which supports a mix of read and write I/O. Application databases should NEVER EVER reside on the same arrays as the log files, and co-locating them with TempDB is also not recommended.

SQL Server comes with other database engine components like the master database and MSDB. These are SQL Server configuration components and mostly utilize read I/O. It’s good to have these components on a mirrored RAID configuration that doesn’t need a lot of write performance, like RAID 1.

A best practice production SQL Server configuration minimally looks like this:

Drive 1: O/S or C: Drive where the virtual memory is also serviced – RAID 1, 80 to 100 GB.

Drive 2: SQL Server Components (master, MSDB, and TempDB) data files – RAID 1+0, 100-240 GB

Drive 3: SQL Server Logs – RAID 1+0, 100-240 GB

Drive 4: Application databases – RAID 5, As much as the databases need…

Where to skimp on a development system? Maybe RAID isn’t available either?

Drive 1: O/S or C: Drive where the virtual memory is also serviced, 80 to 100 GB.

Drive 2: SQL Server Components (master, MSDB, and TempDB) data files plus application database files, as much as the databases need…

Drive 3: SQL Server Logs, 100-240 GB

Optimal Production configuration?

Drive 1: O/S or C: Drive – RAID 1, 60 GB.

Drive 2: SQL Server Components (master, MSDB) data files – RAID 5, 100GB

Drive 3: SQL Server Logs – RAID 1+0, 100-240 GB

Drive 4: Application databases – RAID 5, As much as the databases need…

Drive 5: TempDB – RAID 1+0, 50–100 GB

Drive 6: Dedicated page file only – RAID 1, 40 GB. You don’t want to see what happens to a Windows O/S when the page file is not available.

Buffer I/O is the bane of my existence. I have left no rock unturned on the internet trying to figure out how this process works, so if someone reading this can leave a clarifying comment for an edit, I’d appreciate it. As best I can tell, the buffer is the in-memory pool where SQL Server stages data pages read from disk before your query can work on them. If your system is low on memory and using the page file extensively, you will see Buffer I/O waits in the SQL Server Management Studio Activity Monitor. Basically, this indicates that queries are waiting for memory to become available so data can move from disk into the buffer; nothing more can be staged until space opens up. In fact, if the query result set is big enough, the whole system will begin to die a slow and horrible death as information cannot move in and out of memory or in and out of the buffer, because so much information is going in and out of the page file. This is why I highly recommend splitting up the disks so that SQL Server does not have to fight with the page file for disk I/O.

Look, if you have 10 records in one table used by one user twice a day, that VM with a 40 GB disk, 8 GB RAM, and a dual-core virtual processor is going to do just fine. But you might as well save some cash and move that sucker onto Access or MySQL or some other non-enterprise-level RDBMS.


To Proc or Not to Proc

I’ve had some interesting conversations and fun arguments about how to author queries for SQL Server Reporting Services (SSRS) reports. There are a lot of professionals out there who really want hard-and-fast answers on best practices. The challenge with SSRS is the multitude of configurations available for the system. Is everything (Database Engine, SSAS, SSRS, and SSIS) on one box? Is every service on a dedicated box? Is SSRS integrated with a SharePoint cluster? Where were the hardware investments made in the implementation?

Those are a lot of variables to try to make universal best practices for. Lucky for us, Microsoft provided a tool to help troubleshoot report performance. Within the Report Server database there is a view called ExecutionLog3, which links together various logging tables in the Report Server database. Here are some of the more helpful columns it exposes.

  • ItemPath – The path and name of the report that was executed.
  • UserName – The user the report was run as.
  • Format – The format the report was rendered in (PDF, CSV, HTML4.0, etc.).
  • Parameters – The prompt selections made.
  • TimeStart – Server-local date and time the report was executed.
  • TimeEnd – Server-local date and time the report finished rendering.
  • TimeDataRetrieval – Time in milliseconds to get the report data from the data source.
  • TimeProcessing – Time in milliseconds SSRS took to process the results.
  • TimeRendering – Time in milliseconds required to produce the final output (PDF, CSV, HTML4.0, etc.).
  • Status – Succeeded, Failed, Aborted, etc.

I always provide two reports based on the information found in this view. The first report uses the time columns to give me insight into how the reports are performing and when the system’s utilization peaks. The second report focuses on which users are using which reports, to gauge the effectiveness of the reports for their audience.
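
As a sketch of what the first report’s dataset might look like (the 30-day window is arbitrary; the column names come straight from the view):

-- Average timings per report per hour over the last 30 days.
SELECT  ItemPath,
        DATEPART(hour, TimeStart) AS RunHour,
        COUNT(*)                  AS Executions,
        AVG(TimeDataRetrieval)    AS AvgDataRetrievalMs,
        AVG(TimeProcessing)       AS AvgProcessingMs,
        AVG(TimeRendering)        AS AvgRenderingMs
FROM    dbo.ExecutionLog3
WHERE   TimeStart >= DATEADD(day, -30, GETDATE())
GROUP BY ItemPath, DATEPART(hour, TimeStart)
ORDER BY AvgDataRetrievalMs DESC;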

Generally I’m a big fan of stored procedures, mostly because my reports are usually related to a common data source and stored procedures give me a lot of code reuse. Standardizing the report prompt behavior with stored procedures is also a handy tool. A simple query change can cascade to all the reports that use a stored procedure, alleviating the need to open each report and make the same change. Additionally, I like to order the result sets in SQL, not after the data is returned to the report. But that doesn’t mean you won’t find better performance moving some functionality between tiers based on the results you find in ExecutionLog3.
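
For what it’s worth, here’s the general shape of the stored procedures I point report datasets at; every name in it is hypothetical, but it shows the pattern: prompts map to parameters, an optional parameter covers the “All” selection, and the ORDER BY lives in SQL.

-- Hypothetical report procedure: parameters line up with the report prompts.
CREATE PROCEDURE dbo.rpt_SalesByRegion
    @StartDate date,
    @EndDate   date,
    @RegionID  int = NULL   -- NULL means the "All regions" prompt selection
AS
BEGIN
    SET NOCOUNT ON;

    SELECT  r.RegionName,
            SUM(o.OrderTotal) AS TotalSales
    FROM    dbo.Orders  AS o
    JOIN    dbo.Regions AS r ON r.RegionID = o.RegionID
    WHERE   o.OrderDate >= @StartDate
      AND   o.OrderDate <  DATEADD(day, 1, @EndDate)
      AND  (@RegionID IS NULL OR o.RegionID = @RegionID)
    GROUP BY r.RegionName
    ORDER BY TotalSales DESC;
END;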

I’m sorry, there just isn’t a one-size-fits-all recommendation for how SSRS reports should be structured. Which means: 1) you’ll have to do some research on your configuration, and 2) don’t accept a consultant’s dogma on the topic.