(Reposted from http://www.cnblogs.com/hhh5460/p/6121918.html; please contact the original author before reprinting.)

'''
Item-based collaborative filtering on a rating matrix.

Notes:
1. Adjusted cosine similarity is a model-based collaborative filtering
   technique. One of its advantages is scalability: on large data sets it
   runs fast and uses little memory.
2. Users rate on different scales: when they like a singer, some give 4
   while others give 5; when they dislike one, some give 3 while others
   give only 1. Adjusted cosine similarity compensates by subtracting each
   user's mean rating (over all of that user's ratings) from each rating.
'''
import pandas as pd
from io import StringIO

csv_txt = '''"user","Blues Traveler","Broken Bells","Deadmau5","Norah Jones","Phoenix","Slightly Stoopid","The Strokes","Vampire Weekend"
"Angelica",3.5,2.0,,4.5,5.0,1.5,2.5,2.0
"Bill",2.0,3.5,4.0,,2.0,3.5,,3.0
"Chan",5.0,1.0,1.0,3.0,5,1.0,,
"Dan",3.0,4.0,4.5,,3.0,4.5,4.0,2.0
"Hailey",,4.0,1.0,4.0,,,4.0,1.0
"Jordyn",,4.5,4.0,5.0,5.0,4.5,4.0,4.0
"Sam",5.0,2.0,,3.0,5.0,4.0,5.0,
"Veronica",3.0,,,5.0,4.0,2.5,3.0,'''

csv_txt2 = '''"user","Kacey Musgraves","Imagine Dragons","Daft Punk","Lorde","Fall Out Boy"
"David",,3,5,4,1
"Matt",,3,4,4,1
"Ben",4,3,,3,1
"Chris",4,4,4,3,1
"Tori",5,4,5,,3'''

csv_txt3 = '''"user","Taylor Swift","PSY","Whitney Houston"
"Amy",4,3,4
"Ben",5,2,
"Clara",,3.5,4
"Daisy",5,,3'''

df = None

def load_csv_txt():
    global df
    df = pd.read_csv(StringIO(csv_txt3), header=0, index_col="user")

load_csv_txt()

def computeSimilarity(goods1, goods2):
    '''Adjusted cosine similarity s(i,j), "data mining guide" p.71.'''
    # Subtract each user's mean rating, then keep only users who rated both items.
    df2 = df[[goods1, goods2]].sub(df.mean(axis=1), axis=0).dropna(axis=0)
    return sum(df2[goods1] * df2[goods2]) / (sum(df2[goods1] ** 2) * sum(df2[goods2] ** 2)) ** 0.5

def p(user, goods):
    '''Prediction p(u,i), "data mining guide" p.75.'''
    assert pd.isnull(df.loc[user, goods])
    s1 = df.loc[user, df.loc[user].notnull()]  # the user's known ratings
    s2 = s1.index.to_series().apply(lambda x: computeSimilarity(x, goods))
    return sum(s1 * s2) / sum(abs(s2))

def rate2newrate(rate):
    '''Normalize a rating to [-1, 1]: NR(u,N), "data mining guide" p.76.'''
    ma, mi = df.max().max(), df.min().min()
    return (2 * (rate - mi) - (ma - mi)) / (ma - mi)

def newrate2rate(new_rate):
    '''Map a normalized rating back to the original scale: R(u,N), "data mining guide" p.76.'''
    ma, mi = df.max().max(), df.min().min()
    return (0.5 * (new_rate + 1) * (ma - mi)) + mi

print('\nTest: normalize the rating 3')
print(rate2newrate(3))
print('\nTest: map the normalized value 0.5 back to a rating')
print(newrate2rate(0.5))

def p2(user, goods):
    '''Prediction p(u,i) on normalized ratings, "data mining guide" p.75.'''
    assert pd.isnull(df.loc[user, goods])
    s1 = df.loc[user, df.loc[user].notnull()]
    s1 = s1.apply(rate2newrate)
    s2 = s1.index.to_series().apply(lambda x: computeSimilarity(x, goods))
    return newrate2rate(sum(s1 * s2) / sum(abs(s2)))

def dev(goods1, goods2):
    '''Average rating difference dev(i,j) plus co-rater count, "data mining guide" p.80.'''
    s = (df[goods1] - df[goods2]).dropna()
    d = sum(s) / s.size
    return d, s.size

def get_dev_table():
    '''Deviation table for every item pair, "data mining guide" p.87.'''
    goods_names = df.columns.tolist()
    df2 = pd.DataFrame(0.0, index=goods_names, columns=goods_names)
    for i, goods1 in enumerate(goods_names):
        for goods2 in goods_names[i + 1:]:
            d, _ = dev(goods1, goods2)
            df2.loc[goods1, goods2] = d
            df2.loc[goods2, goods1] = -d
    return df2

print('\nTest: deviation table for every item pair')
print(get_dev_table())

def slopeone(user, goods):
    '''Weighted Slope One prediction p(u,j), "data mining guide" p.82.'''
    s1 = df.loc[user].dropna()  # the user's known ratings
    s2 = s1.index.to_series().apply(lambda x: dev(goods, x))
    s3 = s2.apply(lambda x: x[0])  # deviations dev(goods, i)
    s4 = s2.apply(lambda x: x[1])  # co-rater counts
    return sum((s1 + s3) * s4) / sum(s4)

print("\nTest: predict Ben's rating of Whitney Houston")
print(slopeone('Ben', 'Whitney Houston'))
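As a self-contained cross-check of the adjusted cosine step, here is a minimal plain-Python sketch (no pandas) on the csv_txt2 data above; the function name `adjusted_cosine` is our own, not from the original script:

```python
# Ratings from the csv_txt2 data set above, as plain dicts.
users = {
    "David": {"Imagine Dragons": 3, "Daft Punk": 5, "Lorde": 4, "Fall Out Boy": 1},
    "Matt":  {"Imagine Dragons": 3, "Daft Punk": 4, "Lorde": 4, "Fall Out Boy": 1},
    "Ben":   {"Kacey Musgraves": 4, "Imagine Dragons": 3, "Lorde": 3, "Fall Out Boy": 1},
    "Chris": {"Kacey Musgraves": 4, "Imagine Dragons": 4, "Daft Punk": 4, "Lorde": 3, "Fall Out Boy": 1},
    "Tori":  {"Kacey Musgraves": 5, "Imagine Dragons": 4, "Daft Punk": 5, "Fall Out Boy": 3},
}

def adjusted_cosine(i, j):
    """Adjusted cosine similarity s(i, j): subtract each user's mean rating,
    then take the cosine over users who rated both items."""
    num = den_i = den_j = 0.0
    for r in users.values():
        if i in r and j in r:
            mean = sum(r.values()) / len(r)  # mean over ALL of this user's ratings
            num += (r[i] - mean) * (r[j] - mean)
            den_i += (r[i] - mean) ** 2
            den_j += (r[j] - mean) ** 2
    return num / (den_i * den_j) ** 0.5

print(adjusted_cosine("Kacey Musgraves", "Imagine Dragons"))
```

Only Ben, Chris, and Tori rated both bands, so only their (mean-centered) ratings enter the sum, which is exactly what the `dropna` in `computeSimilarity` achieves with pandas.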
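The two normalization helpers are inverses of each other. To spell out the arithmetic, here is a sketch with the min/max passed explicitly as parameters (the csv_txt3 data has min 2 and max 5, which is what `df.min().min()` / `df.max().max()` produce in the script):

```python
# NR(u, N): map a rating on [mi, ma] onto [-1, 1].
def rate2newrate(rate, mi=2.0, ma=5.0):
    return (2 * (rate - mi) - (ma - mi)) / (ma - mi)

# R(u, N): map a normalized value back onto [mi, ma].
def newrate2rate(new_rate, mi=2.0, ma=5.0):
    return 0.5 * (new_rate + 1) * (ma - mi) + mi

print(rate2newrate(3))      # (2*(3-2) - 3) / 3 = -1/3
print(newrate2rate(0.5))    # 0.5 * 1.5 * 3 + 2 = 4.25
```

Round-tripping any rating through `rate2newrate` and then `newrate2rate` returns the original value, which is why the script can safely predict in normalized space inside `p2` and convert back at the end.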
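Finally, the Slope One step can be cross-checked without pandas. This sketch uses the csv_txt3 data as plain dicts (the names `dev` and `slope_one` mirror the script but are re-implemented here):

```python
# Ratings from the csv_txt3 data set above, as plain dicts.
ratings = {
    "Amy":   {"Taylor Swift": 4, "PSY": 3, "Whitney Houston": 4},
    "Ben":   {"Taylor Swift": 5, "PSY": 2},
    "Clara": {"PSY": 3.5, "Whitney Houston": 4},
    "Daisy": {"Taylor Swift": 5, "Whitney Houston": 3},
}

def dev(i, j):
    """Average difference dev(i, j) over users who rated both, plus that count."""
    diffs = [u[i] - u[j] for u in ratings.values() if i in u and j in u]
    return sum(diffs) / len(diffs), len(diffs)

def slope_one(user, item):
    """Weighted Slope One prediction p(u, j): each deviation is weighted by
    the number of users it was computed from."""
    num = den = 0.0
    for i, r in ratings[user].items():
        if i == item:
            continue
        d, c = dev(item, i)
        num += (r + d) * c
        den += c
    return num / den

print(slope_one("Ben", "Whitney Houston"))
```

Ben rated Taylor Swift 5 and PSY 2; with dev(Whitney Houston, Taylor Swift) = -1 over 2 co-raters and dev(Whitney Houston, PSY) = 0.75 over 2 co-raters, the prediction is ((5 - 1)·2 + (2 + 0.75)·2) / 4 = 3.375, matching the script's final print.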