Mock International Conference Speech

Date: April 8, 2024

Recsplorer: Recommendation Algorithms Based on Precedence Mining

1. Introduction

Thank you very much, Dr. Li, for your kind introduction. Ladies and gentlemen, good morning! I am honored to have been invited to speak at this conference. Before I start my speech, let me ask a question: do you think recommendations from others are useful for your internet shopping? Thank you. It is obvious that recommendations play an important role in our daily consumption decisions.

Today, my topic is Recommendation Algorithms Based on Precedence Mining, and I want to share our research results on recommendation algorithms with you. This presentation is divided into five parts: in Section 1, I will introduce traditional recommendation and our new strategy; in Section 2, I will give the formal definition of precedence mining; in Section 3, I will talk about the novel recommendation algorithms; experimental results will be shown in Section 4; and finally, I will conclude.

2. Body

Section 1: Introduction

The picture on this slide is an example of a recommendation application on Amazon.

Recommender systems provide advice on products, movies, web pages, and many other topics, and have become popular on many sites, such as Amazon. Many systems use collaborative filtering (CF) methods. The main process of CF is as follows: first, identify users similar to the target user; second, recommend items based on those similar users. Unfortunately, the order in which items are consumed is neglected. In our paper, we consider a new recommendation strategy based on precedence patterns. These patterns may encompass user preferences, encode some logical order of options, and capture how interests evolve.
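The two-step CF process just described can be sketched in a few lines of Python. This is an illustrative toy, not the system discussed in the talk; the similarity rule (share at least two items) is a simplification chosen for the example:

```python
# Minimal collaborative-filtering sketch. The similarity rule (share at
# least `min_common` items) is a simplification for illustration.

def similar_users(target, others, min_common=2):
    """Return the users who share at least `min_common` items with the target."""
    return [u for u in others if len(set(u) & set(target)) >= min_common]

def cf_recommend(target, others, min_common=2):
    """Recommend items consumed by similar users that the target lacks."""
    recs = set()
    for u in similar_users(target, others, min_common):
        recs |= set(u) - set(target)
    return recs

target = ["a", "g", "e"]
others = [["a", "d", "b"], ["e", "f", "c"], ["a", "g", "h"], ["g", "e", "h"]]
print(sorted(cf_recommend(target, others)))  # ['h']
```

Note that nothing here looks at the order of items; that is precisely the information precedence mining adds.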

The precedence mining model estimates the probability of a user's future consumption based on past behavior, and these probabilities are used to make recommendations. Our experiments show that precedence mining can significantly improve recommendation performance. Furthermore, it does not suffer from the rating-sparsity problem, and it exploits patterns across all users, not just similar users.

This slide demonstrates the differences between collaborative filtering and precedence mining. Suppose the scenario is course selection: each quarter or semester a student chooses a course and rates it from 1 to 5. Figure a) shows five transcripts, where a transcript is a list of courses; u is our target student, who needs recommendations. Figure b) illustrates how CF works. Assume similar users share at least two common courses and have similar ratings; then u3 and u4 are similar to u, and their common course h becomes a recommendation for u. Figure c) presents how precedence mining works. For this example, we consider patterns where one course follows another. Suppose patterns that occur in at least two transcripts are recognized as significant; then (a, d), (e, f) and (g, h) are found, and d, h, and f are recommendations for u, who has taken a, g and e.
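The pattern-finding step of Figure c) can be illustrated with a short sketch. The transcripts below are toy data invented to mirror the slide's outcome, not the actual figure data:

```python
from collections import Counter

def precedence_patterns(transcripts, min_support=2):
    """Find ordered pairs (x, y), with x before y, that occur in at least
    `min_support` transcripts (each transcript counted once per pair)."""
    counts = Counter()
    for t in transcripts:
        counts.update({(t[i], t[j]) for i in range(len(t)) for j in range(i + 1, len(t))})
    return {pair for pair, c in counts.items() if c >= min_support}

def recommend(patterns, taken):
    """Courses y such that some taken course x has a significant pattern (x, y)."""
    return {y for x, y in patterns if x in taken and y not in taken}

transcripts = [["a", "d", "e", "f"], ["g", "h", "a", "d"], ["e", "f", "g", "h"]]
patterns = precedence_patterns(transcripts)
print(sorted(patterns))                              # [('a', 'd'), ('e', 'f'), ('g', 'h')]
print(sorted(recommend(patterns, ["a", "g", "e"])))  # ['d', 'f', 'h']
```

As on the slide, the significant patterns are (a, d), (e, f) and (g, h), and the student who has taken a, g and e receives d, f and h.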

Now I will present a probabilistic framework for the precedence mining problem. Suppose our target user has taken course a, and we want to compute the probability that course x will follow, i.e., Pr[x|a]. However, what we really need to calculate is Pr[x | a¬x] rather than Pr[x|a], because in our context we are deciding whether x is a good recommendation for a target user who has taken a; we therefore know that the target user's transcript does not contain x before a. For instance, transcript no. 5 will be omitted. In the more common situation, the target user has taken a list of courses T = {a, b, c, …}, not just a. Thus, what we really need is Pr[x | T¬x]. The question is how to compute this probability; I will answer it later.
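For the single-course case, the quantity Pr[x | a¬x] can be estimated empirically by discarding transcripts in which x precedes a. A toy sketch of that filtering (my illustration, assuming each course appears at most once per transcript; this is not the paper's formulation):

```python
def prob_follows(x, a, transcripts):
    """Empirical Pr[x | a(not-x)]: among transcripts where a occurs with no x
    before it, the fraction in which x appears after a.
    Assumes each course occurs at most once per transcript."""
    eligible = follows = 0
    for t in transcripts:
        if a in t and x not in t[:t.index(a)]:
            eligible += 1
            if x in t[t.index(a) + 1:]:
                follows += 1
    return follows / eligible if eligible else 0.0

transcripts = [["a", "x"], ["a", "b"], ["x", "a"], ["c", "a", "x"]]
print(prob_follows("x", "a", transcripts))  # 2/3: the third transcript is excluded
```

The third transcript has x before a, so, just like transcript no. 5 on the slide, it does not count as evidence either way.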

Section 2: Precedence Mining

We consider a set D of distinct courses. We use lowercase letters (e.g., a, b, …) to refer to courses in D. A transcript T is a sequence of courses, e.g., a -> b -> c -> d. The Top-k Recommendation Problem is then defined as follows. Given a set of transcripts over D for n users, the transcript T of a target user, and a desired number of recommendations k, our goal is to:

1. Assign a score score(x) (between 0 and 1) to every course x ∈ D that reflects how likely it is that the target student will be interested in taking x. If x ∈ T, then score(x) = 0.

2. Using the score function, select the top k courses to recommend to the target user.

To compute scores, we propose to use the following statistics, where x, y ∈ D:

f(x): the number of transcripts that contain x.

g(x, y): the number of transcripts in which x precedes course y.

This slide shows the calculated values of f(x) and g(x, y). For example, from the table, we know that f(a) is 10 and g(a, c) is 3.
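Both statistics can be gathered in one pass over the transcripts. A minimal sketch (my illustration, with made-up toy data rather than the table on the slide; each transcript is counted at most once per course or pair):

```python
from collections import Counter

def precedence_stats(transcripts):
    """f[x]: number of transcripts containing x.
    g[(x, y)]: number of transcripts in which x precedes y."""
    f, g = Counter(), Counter()
    for t in transcripts:
        f.update(set(t))  # a transcript contributes at most 1 to f[x]
        g.update({(t[i], t[j]) for i in range(len(t)) for j in range(i + 1, len(t))})
    return f, g

f, g = precedence_stats([["a", "b", "c"], ["b", "a"], ["a", "c"]])
print(f["a"], g[("a", "c")])  # 3 2
```

All the probabilities in the precedence mining model are then derived from these two tables.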

We propose a precedence mining model to solve the Top-k Recommendation Problem. Here is some notation: ¬yx, which we mentioned in Section 1, means that x occurs in a transcript without y preceding it; x¬y means that x occurs without y following it. We use the quantities f(x) and g(x, y) to compute probabilities that encode the precedence information, shown in formulas 1 to 7. I will not go through all of the formulas in detail; let us just pay attention to formula 5. Note that this quantity is the same as Pr[x¬y | y¬x], which will be used to compute score(x).

As we know, the target user has usually taken a list of courses rather than a single course, so we need to extend our probability formulas. For example, suppose T = {a, b}: Pr[¬Tx] is the probability that x occurs without either a or b preceding it, and Pr[x¬T] is the probability that x occurs without either a or b following it. Can these probabilities be calculated exactly? I will return to this question in Section 3.

Section 3: Recommendation Algorithms

Let us review Section 2. The main goal of the recommendation algorithms is to calculate score(x) and then select the top k courses based on these scores. Traditional recommendation algorithms compute a recommendation score for a course x in D based only on its frequency of occurrence; they do not take into account the courses taken by the target user.

Our recommendation algorithm, called SingleMC, overcomes this shortcoming of the traditional ones. It computes score(x) using formula 5. The details are as follows: for a student with a transcript T of taken courses, and for each course y ∈ T, if the number of transcripts in which y and x appear together meets the threshold θ, we compute Pr[x¬y | y¬x], reflecting the likelihood that the student will take course x while ignoring the effect of the other courses in T; finally, the maximum of Pr[x¬y | y¬x] over y is chosen as score(x).

Here is the calculation formula of score(x) for SingleMC. For example, with the higher score, d will be recommended.
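SingleMC can be sketched as follows. The scoring expression here is my reconstruction from the f and g statistics, not necessarily the paper's exact formula 5: it estimates Pr[x follows y | y occurred without x before it] as g(y, x) / (f(y) − g(x, y)), and it assumes each course appears at most once per transcript:

```python
from collections import Counter

def singlemc_score(x, T, f, g, theta=2):
    """Sketch of a SingleMC-style score (a reconstruction, not the paper's
    exact formula): for each taken course y whose co-occurrence with x meets
    the threshold theta, estimate the chance that x follows y, then take the
    maximum over y in T."""
    if x in T:
        return 0.0  # never recommend a course already taken
    best = 0.0
    for y in T:
        co_occurrence = g[(y, x)] + g[(x, y)]  # transcripts containing both
        denom = f[y] - g[(x, y)]               # y occurs, x not before it
        if co_occurrence >= theta and denom > 0:
            best = max(best, g[(y, x)] / denom)
    return best

f = Counter({"a": 10, "d": 6})
g = Counter({("a", "d"): 5, ("d", "a"): 1})
print(singlemc_score("d", ["a"], f, g))  # 5/9 under this reconstruction
```

The key property, as in the talk, is that the score depends on the target user's own transcript T, not just on global frequencies.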

Another new recommendation algorithm, named the Joint Probabilities algorithm, or JointP for short, is also proposed. Unlike SingleMC, JointP takes into account the complete set of courses in a transcript. The quantity in formula 12 cannot be computed exactly; remember the problem we mentioned earlier. Our solution is to use approximations. This slide shows the first approximation formula, and this one shows the second.

Section 4: Experiments

The experimental system is CourseRank, and the data set for the experiments contains 7,500 transcripts.

This slide shows the new recommendation algorithms in black and the traditional ones in blue.

The chart on this slide indicates that our new recommendation algorithms beat the traditional ones in precision, because the former exploit patterns across all users, while the latter use only similar users.

The chart on this slide shows that our new recommendation algorithms also beat the traditional ones in coverage, for the same reason.

Section 5: Conclusion and Summary

In conclusion, we proposed a novel precedence mining model, developed a probabilistic framework for making recommendations, and implemented a suite of recommendation algorithms that use the precedence information. Experimental results show that our new algorithms perform better than the traditional ones, and our recommendation system can be easily generalized to other scenarios, such as purchases of books, DVDs, and electronic equipment.

To sum up: first, I introduced the motivation and the outline of our work; second, I gave the definition of the precedence mining model; third, I described some new recommendation algorithms using precedence information; fourth, I showed experimental results comparing the new algorithms with traditional ones; and finally, I concluded our work.

That’s all. Thank you! Are there any questions?
