Logarithms Explained, and the Associative Property of Multiplication

The Trickle-Down

So much advanced math relies on a firm grasp of basic Algebra and Algebra II.

Today, let's take a look at logarithms!

So what are logarithms? Well, first let’s look at exponential equations, such as $latex 2^x = y$, where 2 is the base. We all know, for example, that $latex 2^3 = 8$. The general form is $latex b^x = y$, where b is the base. Well, with logarithms, the format is $latex \log_b y = x$. So for $latex 2^3 = 8$, we would express that with logarithms as $latex \log_2 8 = 3$. Fun, isn’t it! The logarithm is the exponent to which the base must be raised to produce a given number; in the example above, the base 2 is raised to the power 3 to equal the number 8.
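As a quick sanity check, the exponent/logarithm relationship (and a preview of the product rule mentioned below) can be sketched in Python — a minimal illustration, where `math.log2` is just the base-2 logarithm:

```python
import math

# b**x = y is the same statement as log_b(y) = x
assert 2 ** 3 == 8
print(math.log2(8))  # 3.0, i.e. log_2 8 = 3

# preview of the product rule: log_b(m*n) = log_b(m) + log_b(n)
print(math.log2(8 * 4))             # 5.0
print(math.log2(8) + math.log2(4))  # 3.0 + 2.0 = 5.0
```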

So the tricky part is that you get rules like $latex \log_b y + \log_b…

View original post 850 more words

Posted in Uncategorized | Leave a comment

Jeff Bezos on Leading for the Long-Term at Amazon

Wish Bezos did more of these…


The Fast Fourier Transform

Math ∩ Programming

John Tukey, one of the developers of the Cooley-Tukey FFT algorithm.

It’s often said that the Age of Information began on August 17, 1964 with the publication of Cooley and Tukey’s paper, “An Algorithm for the Machine Calculation of Complex Fourier Series.” They published a landmark algorithm which has since been called the Fast Fourier Transform algorithm, and has spawned countless variations. Specifically, it improved the best known computational bound on the discrete Fourier transform from $latex O(n^2)$ to $latex O(n \log n)$, which is the difference between uselessness and panacea.

Indeed, their work was revolutionary because so much of our current daily lives depends on efficient signal processing. Digital audio and video, graphics, mobile phones, radar and sonar, satellite transmissions, weather forecasting, economics and medicine all use the Fast Fourier Transform algorithm in a crucial way. (Not to mention that electronic circuits wouldn’t exist without Fourier analysis in general.)…
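To make the $latex O(n^2)$ versus $latex O(n \log n)$ difference concrete, here is a minimal Python sketch — not the authors' original formulation — of a naive DFT next to a radix-2 Cooley-Tukey FFT; for power-of-two lengths both compute the same transform:

```python
import cmath

def dft_naive(x):
    # O(n^2): evaluate X_k = sum_j x_j * e^(-2*pi*i*j*k/n) directly
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def fft(x):
    # O(n log n) radix-2 Cooley-Tukey: split into even/odd halves, recurse,
    # then combine with "twiddle factors"; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
naive, fast = dft_naive(signal), fft(signal)
print(max(abs(a - b) for a, b in zip(naive, fast)))  # tiny — same result
```

The recursion does O(n) combine work at each of the log n levels, which is where the speedup comes from.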

View original post 2,500 more words


Predictive Analytics and Spurious Correlations


The spurious correlations site has a lot of interesting charts showing arbitrary pairs of trends that correlate strongly and yet have no rational basis to suggest causation. The site also has a nice feature for exploring other correlations: the hyperlinks on the chart titles lead to other trends that correlate with that topic. Hidden at the bottom of the main page is a link to an entertaining video that nicely discusses how correlation differs from causation. Some of my discussion concerns the points he makes in the video. His video expresses an optimism that humans will always be in the loop to insert sanity after just a brief moment of belief that there could be a causal relationship behind such compelling correlations, both in graphic form and in statistical values. The evidence I see is that this optimism is misplaced, as illustrated by…

View original post 1,215 more words


Data Analysis Learning Path on SlideRule

Data Science 101

SlideRule is a new startup focused on being an online learning hub. One section of the site allows experts to create “learning paths” for a topic. Claudia Gold, a data scientist at Airbnb, created a learning path for data science titled Data Analysis Learning Path. The learning path covers the topics, timelines, resources, and links needed to acquire the skills of a data scientist.

Happy Learning.

View original post


Script to quickly find out which SPID is using the most CPU and/or IO, and what that SPID is doing


Hey Guys,

This is a bit off-topic, and specific to SQL Server, but this script has been very useful to me when the server is running slow.  Run the first part, up to the second dashed line, all together, and then use the identified SPID(s) in the queries below that line to see what each one is doing:

------identify spid with highest cpu and io usage-----
SELECT spid, sum(cpu) as cpu
into #temp1
FROM master.dbo.sysprocesses
group by spid
WAITFOR DELAY '0:0:0.3';
SELECT spid, sum(cpu) as cpu
into #temp2
FROM master.dbo.sysprocesses
group by spid

select t3.spid, t4.cpu - t3.cpu diff
from #temp1 t3 inner join #temp2 t4 on t3.spid = t4.spid
order by diff desc

SELECT spid, sum(physical_io) as physical_io
into #temp3
FROM master.dbo.sysprocesses
group by spid
WAITFOR DELAY '0:0:0.3';
SELECT spid, sum(physical_io) as physical_io
into #temp4
FROM master.dbo.sysprocesses
group by spid

select t3.spid, t4.physical_io - t3.physical_io diff
from #temp3 t3 inner join #temp4 t4 on t3.spid = t4.spid
order by diff desc

drop table #temp1
drop table #temp2
drop table #temp3
drop table #temp4
------NOW, to see what the process is ACTUALLY DOING-----
--same as Activity Monitor (use from ANY db)
select * from master..sysprocesses where spid=73

--same as 'details' from Activity Monitor
DBCC inputbuffer(73) --from any db

--interesting - similar to above, but with variables, if used, instead of actual values (e.g. @strDate)
--*PLUS* this shows you the CURRENT procedure running, not just the wrapper procedure like above
DECLARE @Handle binary(20)
SELECT @Handle = sql_handle FROM master..sysprocesses WHERE spid = 73
SELECT * FROM ::fn_get_sql(@Handle) --seems to cut off text at some point
--SQL2005 (doesn't always return same as sql2000 format)
SELECT r.session_id, s.text
FROM sys.dm_exec_requests AS r
     CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS s
WHERE r.session_id = 73

Hello world! (and why you should use the Decimal data type and not Money in SQL Server)

Hi!  This blog is called “The Order of SQL” as both a reference to the community of SQL users I hope to cultivate here and to an upcoming e-book I am working on that explains in simple terms exactly how a SQL query is processed and in what order (no, it is NOT all processed simultaneously).

I will try to stick to ANSI SQL as closely as possible, but since my professional focus is on SQL Server, I may occasionally add in a tidbit or two specific to that platform.  Hence, today we discuss an important “gotcha” in SQL Server around the currency data type.

Check this example out:

declare @m money
declare @d decimal(9,2)

set @m = 19.34
set @d = 19.34

select (@m/1000)*1000 as money, (@d/1000)*1000 as decimal

money     decimal
19.30       19.3400000

It should be obvious that if we start with ANY number (19.34 in this case), divide by ANY number (1000 in this case), and then multiply by the SAME number we divided by (1000), we should always end up with the number we started with.
So, what happened with the “money” column?  Where did the .04 go?


The answer is related to how the money (aka currency) data type is stored: it has a “scale” of only 4.  This means it can store only 4 digits after the decimal point.


So, here is what happened:
19.34/1000 = .01934, BUT we only have 4 decimal places, so this is truncated to .0193.
THEN, .0193 * 1000 = 19.30!


Some of you may now be saying, “Wait!  The decimal variable has even LESS scale (2).  How did it maintain all the decimals?!”


This is apparently due to the fact that although the variable @d can only store 2 decimals, the intermediate results of a computation involving that variable can indeed be stored beyond that limitation.  The problem with the currency data types is that SQL Server apparently chooses to store those intermediate results at the same scale as the money variable itself, hence the truncation of the 4 from the end of .01934.
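That behavior can be sketched outside SQL Server with Python's `decimal` module. Forcing every intermediate result to a scale of 4 reproduces the lost .04; note that the four-decimal quantize step here is my assumption for illustration, mimicking the money type's behavior described above, not SQL Server's documented internals:

```python
from decimal import Decimal

def money_scale(x):
    # assumption for illustration: keep intermediate results at scale 4,
    # mimicking how the money type apparently stores them
    return x.quantize(Decimal("0.0001"))

m = Decimal("19.34")
print(money_scale(m / 1000) * 1000)  # 19.3000 -- the .04 is gone

d = Decimal("19.34")
print((d / 1000) * 1000)             # intermediate precision survives: 19.34...
```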


The lesson?  Just use Decimal(9,2) for currency — it takes only 5 bytes of storage.  I have yet to find a reason to extend that scale beyond 2, although I’ve seen some people do it (e.g., Decimal(10,4)).  If you know of an advantage or reason to do that – please let me know in the comments below!


Thanks, and keep checking back for my e-book on the “Order of SQL” – it will be good — I promise! — and it will most likely be free at first, as I hope to get YOUR help in making it better before I sell it – see ya soon!