Time: April 9, 9:00–10:30 AM
Tencent Meeting ID: 342 825 772
Abstract:
Should firms that use machine learning algorithms for decision-making make their algorithms transparent? Despite growing calls for algorithmic transparency, most firms have kept their algorithms opaque, citing potential gaming by users that may negatively affect the algorithms' predictive power. We develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users, and present novel insights. We identify a broad set of conditions under which making the algorithm transparent benefits the firm. We show that, in some cases, even the predictive power of the algorithm may increase if the firm makes the algorithm transparent. By contrast, users may not always be better off under algorithmic transparency. These results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve on those features is close to zero. Overall, our results show that firms should not view manipulation by users as inherently harmful. Rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features.
Speaker Bio:
Dr. Yan Huang is an Assistant Professor at the Tepper School of Business, Carnegie Mellon University. Dr. Huang's research focuses on using quantitative methods to study the economic and societal impact of technologies, particularly artificial intelligence, machine learning, and crowd-based technologies, and the mechanisms behind them. Building on these insights, the research proposes strategies and policies to promote the production and appropriate use of technology, and identifies effective designs for technology-enabled platforms and applications. Dr. Huang received a bachelor's degree from Tsinghua University and a Ph.D. from Carnegie Mellon University.
(Organized by: Department of Management Engineering; Center for Research and Academic Exchange)